00:00:00.000 Started by upstream project "autotest-per-patch" build number 132530 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.022 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.023 The recommended git tool is: git 00:00:00.023 using credential 00000000-0000-0000-0000-000000000002 00:00:00.026 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.045 Fetching changes from the remote Git repository 00:00:00.047 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.095 Using shallow fetch with depth 1 00:00:00.095 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.095 > git --version # timeout=10 00:00:00.156 > git --version # 'git version 2.39.2' 00:00:00.156 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.226 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.226 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.659 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.674 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.688 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:03.688 > git config core.sparsecheckout # timeout=10 00:00:03.700 > git read-tree -mu HEAD # timeout=10 00:00:03.717 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:03.737 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:03.737 > git rev-list 
--no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:03.815 [Pipeline] Start of Pipeline 00:00:03.832 [Pipeline] library 00:00:03.834 Loading library shm_lib@master 00:00:03.834 Library shm_lib@master is cached. Copying from home. 00:00:03.849 [Pipeline] node 00:00:18.852 Still waiting to schedule task 00:00:18.852 Waiting for next available executor on ‘vagrant-vm-host’ 00:25:21.367 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest 00:25:21.369 [Pipeline] { 00:25:21.380 [Pipeline] catchError 00:25:21.381 [Pipeline] { 00:25:21.398 [Pipeline] wrap 00:25:21.409 [Pipeline] { 00:25:21.419 [Pipeline] stage 00:25:21.422 [Pipeline] { (Prologue) 00:25:21.447 [Pipeline] echo 00:25:21.448 Node: VM-host-WFP7 00:25:21.455 [Pipeline] cleanWs 00:25:21.464 [WS-CLEANUP] Deleting project workspace... 00:25:21.465 [WS-CLEANUP] Deferred wipeout is used... 00:25:21.471 [WS-CLEANUP] done 00:25:21.681 [Pipeline] setCustomBuildProperty 00:25:21.789 [Pipeline] httpRequest 00:25:22.123 [Pipeline] echo 00:25:22.125 Sorcerer 10.211.164.101 is alive 00:25:22.134 [Pipeline] retry 00:25:22.136 [Pipeline] { 00:25:22.148 [Pipeline] httpRequest 00:25:22.152 HttpMethod: GET 00:25:22.153 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:25:22.154 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:25:22.154 Response Code: HTTP/1.1 200 OK 00:25:22.155 Success: Status code 200 is in the accepted range: 200,404 00:25:22.155 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:25:22.301 [Pipeline] } 00:25:22.321 [Pipeline] // retry 00:25:22.330 [Pipeline] sh 00:25:22.615 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:25:22.634 [Pipeline] httpRequest 00:25:23.032 [Pipeline] echo 00:25:23.035 Sorcerer 10.211.164.101 is alive 00:25:23.046 [Pipeline] retry 00:25:23.048 
[Pipeline] { 00:25:23.063 [Pipeline] httpRequest 00:25:23.068 HttpMethod: GET 00:25:23.069 URL: http://10.211.164.101/packages/spdk_c86e5b1821f2ac77b97aa0d4f25d3c02e876cf47.tar.gz 00:25:23.071 Sending request to url: http://10.211.164.101/packages/spdk_c86e5b1821f2ac77b97aa0d4f25d3c02e876cf47.tar.gz 00:25:23.071 Response Code: HTTP/1.1 200 OK 00:25:23.072 Success: Status code 200 is in the accepted range: 200,404 00:25:23.073 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_c86e5b1821f2ac77b97aa0d4f25d3c02e876cf47.tar.gz 00:25:25.344 [Pipeline] } 00:25:25.363 [Pipeline] // retry 00:25:25.372 [Pipeline] sh 00:25:25.660 + tar --no-same-owner -xf spdk_c86e5b1821f2ac77b97aa0d4f25d3c02e876cf47.tar.gz 00:25:28.955 [Pipeline] sh 00:25:29.238 + git -C spdk log --oneline -n5 00:25:29.238 c86e5b182 bdev/malloc: Extract internal of verify_pi() for code reuse 00:25:29.238 97329b16b bdev/malloc: malloc_done() uses switch-case for clean up 00:25:29.238 afdec00e1 nvmf: Add hide_metadata option to nvmf_subsystem_add_ns 00:25:29.238 b09de013a nvmf: Get metadata config by not bdev but bdev_desc 00:25:29.238 971ec0126 bdevperf: Add hide_metadata option 00:25:29.258 [Pipeline] writeFile 00:25:29.274 [Pipeline] sh 00:25:29.554 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:25:29.574 [Pipeline] sh 00:25:29.880 + cat autorun-spdk.conf 00:25:29.880 SPDK_RUN_FUNCTIONAL_TEST=1 00:25:29.880 SPDK_RUN_ASAN=1 00:25:29.880 SPDK_RUN_UBSAN=1 00:25:29.880 SPDK_TEST_RAID=1 00:25:29.880 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:25:29.886 RUN_NIGHTLY=0 00:25:29.888 [Pipeline] } 00:25:29.904 [Pipeline] // stage 00:25:29.919 [Pipeline] stage 00:25:29.922 [Pipeline] { (Run VM) 00:25:29.934 [Pipeline] sh 00:25:30.212 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:25:30.212 + echo 'Start stage prepare_nvme.sh' 00:25:30.212 Start stage prepare_nvme.sh 00:25:30.212 + [[ -n 3 ]] 00:25:30.212 + disk_prefix=ex3 00:25:30.212 + [[ -n 
/var/jenkins/workspace/raid-vg-autotest ]] 00:25:30.212 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]] 00:25:30.212 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 00:25:30.212 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:25:30.212 ++ SPDK_RUN_ASAN=1 00:25:30.212 ++ SPDK_RUN_UBSAN=1 00:25:30.212 ++ SPDK_TEST_RAID=1 00:25:30.212 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:25:30.212 ++ RUN_NIGHTLY=0 00:25:30.212 + cd /var/jenkins/workspace/raid-vg-autotest 00:25:30.212 + nvme_files=() 00:25:30.212 + declare -A nvme_files 00:25:30.212 + backend_dir=/var/lib/libvirt/images/backends 00:25:30.212 + nvme_files['nvme.img']=5G 00:25:30.212 + nvme_files['nvme-cmb.img']=5G 00:25:30.212 + nvme_files['nvme-multi0.img']=4G 00:25:30.212 + nvme_files['nvme-multi1.img']=4G 00:25:30.212 + nvme_files['nvme-multi2.img']=4G 00:25:30.212 + nvme_files['nvme-openstack.img']=8G 00:25:30.212 + nvme_files['nvme-zns.img']=5G 00:25:30.212 + (( SPDK_TEST_NVME_PMR == 1 )) 00:25:30.212 + (( SPDK_TEST_FTL == 1 )) 00:25:30.212 + (( SPDK_TEST_NVME_FDP == 1 )) 00:25:30.212 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:25:30.212 + for nvme in "${!nvme_files[@]}" 00:25:30.212 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:25:30.212 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:25:30.212 + for nvme in "${!nvme_files[@]}" 00:25:30.212 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:25:30.212 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:25:30.212 + for nvme in "${!nvme_files[@]}" 00:25:30.212 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:25:30.212 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:25:30.212 + for nvme in "${!nvme_files[@]}" 00:25:30.212 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:25:30.212 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:25:30.212 + for nvme in "${!nvme_files[@]}" 00:25:30.212 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:25:30.212 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:25:30.212 + for nvme in "${!nvme_files[@]}" 00:25:30.212 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:25:30.212 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:25:30.212 + for nvme in "${!nvme_files[@]}" 00:25:30.212 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:25:30.212 
Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:25:30.471 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:25:30.471 + echo 'End stage prepare_nvme.sh' 00:25:30.471 End stage prepare_nvme.sh 00:25:30.482 [Pipeline] sh 00:25:30.809 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:25:30.809 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -H -a -v -f fedora39 00:25:30.809 00:25:30.809 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 00:25:30.809 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 00:25:30.809 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 00:25:30.809 HELP=0 00:25:30.809 DRY_RUN=0 00:25:30.809 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img, 00:25:30.809 NVME_DISKS_TYPE=nvme,nvme, 00:25:30.809 NVME_AUTO_CREATE=0 00:25:30.809 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img, 00:25:30.809 NVME_CMB=,, 00:25:30.809 NVME_PMR=,, 00:25:30.809 NVME_ZNS=,, 00:25:30.809 NVME_MS=,, 00:25:30.809 NVME_FDP=,, 00:25:30.809 SPDK_VAGRANT_DISTRO=fedora39 00:25:30.809 SPDK_VAGRANT_VMCPU=10 00:25:30.809 SPDK_VAGRANT_VMRAM=12288 00:25:30.809 SPDK_VAGRANT_PROVIDER=libvirt 00:25:30.809 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:25:30.809 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:25:30.809 SPDK_OPENSTACK_NETWORK=0 00:25:30.809 VAGRANT_PACKAGE_BOX=0 00:25:30.809 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:25:30.809 
FORCE_DISTRO=true 00:25:30.809 VAGRANT_BOX_VERSION= 00:25:30.809 EXTRA_VAGRANTFILES= 00:25:30.809 NIC_MODEL=virtio 00:25:30.809 00:25:30.809 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt' 00:25:30.809 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest 00:25:33.338 Bringing machine 'default' up with 'libvirt' provider... 00:25:34.275 ==> default: Creating image (snapshot of base box volume). 00:25:34.275 ==> default: Creating domain with the following settings... 00:25:34.275 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732641754_1e14f571cf57809928df 00:25:34.275 ==> default: -- Domain type: kvm 00:25:34.275 ==> default: -- Cpus: 10 00:25:34.275 ==> default: -- Feature: acpi 00:25:34.275 ==> default: -- Feature: apic 00:25:34.275 ==> default: -- Feature: pae 00:25:34.275 ==> default: -- Memory: 12288M 00:25:34.275 ==> default: -- Memory Backing: hugepages: 00:25:34.275 ==> default: -- Management MAC: 00:25:34.275 ==> default: -- Loader: 00:25:34.275 ==> default: -- Nvram: 00:25:34.275 ==> default: -- Base box: spdk/fedora39 00:25:34.275 ==> default: -- Storage pool: default 00:25:34.275 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732641754_1e14f571cf57809928df.img (20G) 00:25:34.275 ==> default: -- Volume Cache: default 00:25:34.275 ==> default: -- Kernel: 00:25:34.275 ==> default: -- Initrd: 00:25:34.275 ==> default: -- Graphics Type: vnc 00:25:34.275 ==> default: -- Graphics Port: -1 00:25:34.275 ==> default: -- Graphics IP: 127.0.0.1 00:25:34.275 ==> default: -- Graphics Password: Not defined 00:25:34.275 ==> default: -- Video Type: cirrus 00:25:34.275 ==> default: -- Video VRAM: 9216 00:25:34.275 ==> default: -- Sound Type: 00:25:34.275 ==> default: -- Keymap: en-us 00:25:34.275 ==> default: -- TPM Path: 00:25:34.275 ==> default: -- INPUT: type=mouse, bus=ps2 00:25:34.275 ==> default: -- Command line args: 00:25:34.275 
==> default: -> value=-device, 00:25:34.275 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:25:34.275 ==> default: -> value=-drive, 00:25:34.275 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0, 00:25:34.275 ==> default: -> value=-device, 00:25:34.275 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:25:34.275 ==> default: -> value=-device, 00:25:34.275 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:25:34.275 ==> default: -> value=-drive, 00:25:34.275 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:25:34.275 ==> default: -> value=-device, 00:25:34.275 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:25:34.275 ==> default: -> value=-drive, 00:25:34.275 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:25:34.275 ==> default: -> value=-device, 00:25:34.275 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:25:34.275 ==> default: -> value=-drive, 00:25:34.275 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:25:34.275 ==> default: -> value=-device, 00:25:34.275 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:25:34.534 ==> default: Creating shared folders metadata... 00:25:34.534 ==> default: Starting domain. 00:25:35.908 ==> default: Waiting for domain to get an IP address... 00:25:50.801 ==> default: Waiting for SSH to become available... 00:25:52.312 ==> default: Configuring and enabling network interfaces... 
00:25:58.877 default: SSH address: 192.168.121.67:22 00:25:58.877 default: SSH username: vagrant 00:25:58.877 default: SSH auth method: private key 00:26:01.410 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:26:11.391 ==> default: Mounting SSHFS shared folder... 00:26:11.959 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:26:11.959 ==> default: Checking Mount.. 00:26:13.898 ==> default: Folder Successfully Mounted! 00:26:13.898 ==> default: Running provisioner: file... 00:26:14.835 default: ~/.gitconfig => .gitconfig 00:26:15.404 00:26:15.404 SUCCESS! 00:26:15.404 00:26:15.404 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:26:15.404 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:26:15.404 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 
00:26:15.404 00:26:15.413 [Pipeline] } 00:26:15.429 [Pipeline] // stage 00:26:15.439 [Pipeline] dir 00:26:15.439 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt 00:26:15.441 [Pipeline] { 00:26:15.454 [Pipeline] catchError 00:26:15.455 [Pipeline] { 00:26:15.467 [Pipeline] sh 00:26:15.748 + vagrant ssh-config --host vagrant 00:26:15.748 + sed -ne /^Host/,$p 00:26:15.748 + tee ssh_conf 00:26:19.037 Host vagrant 00:26:19.037 HostName 192.168.121.67 00:26:19.037 User vagrant 00:26:19.037 Port 22 00:26:19.037 UserKnownHostsFile /dev/null 00:26:19.037 StrictHostKeyChecking no 00:26:19.037 PasswordAuthentication no 00:26:19.037 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:26:19.037 IdentitiesOnly yes 00:26:19.037 LogLevel FATAL 00:26:19.037 ForwardAgent yes 00:26:19.037 ForwardX11 yes 00:26:19.037 00:26:19.052 [Pipeline] withEnv 00:26:19.054 [Pipeline] { 00:26:19.072 [Pipeline] sh 00:26:19.353 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:26:19.353 source /etc/os-release 00:26:19.353 [[ -e /image.version ]] && img=$(< /image.version) 00:26:19.353 # Minimal, systemd-like check. 00:26:19.353 if [[ -e /.dockerenv ]]; then 00:26:19.353 # Clear garbage from the node's name: 00:26:19.353 # agt-er_autotest_547-896 -> autotest_547-896 00:26:19.353 # $HOSTNAME is the actual container id 00:26:19.353 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:26:19.353 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:26:19.353 # We can assume this is a mount from a host where container is running, 00:26:19.353 # so fetch its hostname to easily identify the target swarm worker. 
00:26:19.353 container="$(< /etc/hostname) ($agent)" 00:26:19.353 else 00:26:19.353 # Fallback 00:26:19.353 container=$agent 00:26:19.353 fi 00:26:19.353 fi 00:26:19.353 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:26:19.353 00:26:19.364 [Pipeline] } 00:26:19.381 [Pipeline] // withEnv 00:26:19.391 [Pipeline] setCustomBuildProperty 00:26:19.406 [Pipeline] stage 00:26:19.408 [Pipeline] { (Tests) 00:26:19.426 [Pipeline] sh 00:26:19.781 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:26:19.792 [Pipeline] sh 00:26:20.071 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:26:20.087 [Pipeline] timeout 00:26:20.087 Timeout set to expire in 1 hr 30 min 00:26:20.089 [Pipeline] { 00:26:20.104 [Pipeline] sh 00:26:20.382 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:26:20.952 HEAD is now at c86e5b182 bdev/malloc: Extract internal of verify_pi() for code reuse 00:26:20.961 [Pipeline] sh 00:26:21.272 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:26:21.285 [Pipeline] sh 00:26:21.571 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:26:21.846 [Pipeline] sh 00:26:22.132 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:26:22.391 ++ readlink -f spdk_repo 00:26:22.391 + DIR_ROOT=/home/vagrant/spdk_repo 00:26:22.391 + [[ -n /home/vagrant/spdk_repo ]] 00:26:22.391 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:26:22.391 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:26:22.391 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:26:22.391 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:26:22.391 + [[ -d /home/vagrant/spdk_repo/output ]] 00:26:22.391 + [[ raid-vg-autotest == pkgdep-* ]] 00:26:22.391 + cd /home/vagrant/spdk_repo 00:26:22.391 + source /etc/os-release 00:26:22.391 ++ NAME='Fedora Linux' 00:26:22.391 ++ VERSION='39 (Cloud Edition)' 00:26:22.391 ++ ID=fedora 00:26:22.391 ++ VERSION_ID=39 00:26:22.391 ++ VERSION_CODENAME= 00:26:22.391 ++ PLATFORM_ID=platform:f39 00:26:22.391 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:26:22.391 ++ ANSI_COLOR='0;38;2;60;110;180' 00:26:22.391 ++ LOGO=fedora-logo-icon 00:26:22.391 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:26:22.391 ++ HOME_URL=https://fedoraproject.org/ 00:26:22.391 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:26:22.391 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:26:22.391 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:26:22.391 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:26:22.391 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:26:22.391 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:26:22.391 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:26:22.391 ++ SUPPORT_END=2024-11-12 00:26:22.391 ++ VARIANT='Cloud Edition' 00:26:22.391 ++ VARIANT_ID=cloud 00:26:22.391 + uname -a 00:26:22.391 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:26:22.391 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:26:22.958 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:22.958 Hugepages 00:26:22.958 node hugesize free / total 00:26:22.958 node0 1048576kB 0 / 0 00:26:22.958 node0 2048kB 0 / 0 00:26:22.958 00:26:22.958 Type BDF Vendor Device NUMA Driver Device Block devices 00:26:22.958 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:26:22.958 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:26:22.958 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 00:26:23.218 + rm -f /tmp/spdk-ld-path 00:26:23.218 + source autorun-spdk.conf 00:26:23.218 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:26:23.218 ++ SPDK_RUN_ASAN=1 00:26:23.218 ++ SPDK_RUN_UBSAN=1 00:26:23.218 ++ SPDK_TEST_RAID=1 00:26:23.218 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:26:23.218 ++ RUN_NIGHTLY=0 00:26:23.218 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:26:23.218 + [[ -n '' ]] 00:26:23.218 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:26:23.218 + for M in /var/spdk/build-*-manifest.txt 00:26:23.218 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:26:23.218 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:26:23.218 + for M in /var/spdk/build-*-manifest.txt 00:26:23.218 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:26:23.218 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:26:23.218 + for M in /var/spdk/build-*-manifest.txt 00:26:23.218 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:26:23.218 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:26:23.218 ++ uname 00:26:23.218 + [[ Linux == \L\i\n\u\x ]] 00:26:23.218 + sudo dmesg -T 00:26:23.218 + sudo dmesg --clear 00:26:23.218 + dmesg_pid=5434 00:26:23.218 + [[ Fedora Linux == FreeBSD ]] 00:26:23.218 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:26:23.218 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:26:23.218 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:26:23.219 + [[ -x /usr/src/fio-static/fio ]] 00:26:23.219 + sudo dmesg -Tw 00:26:23.219 + export FIO_BIN=/usr/src/fio-static/fio 00:26:23.219 + FIO_BIN=/usr/src/fio-static/fio 00:26:23.219 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:26:23.219 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:26:23.219 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:26:23.219 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:26:23.219 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:26:23.219 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:26:23.219 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:26:23.219 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:26:23.219 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:26:23.479 17:23:23 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:26:23.479 17:23:23 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:26:23.479 17:23:23 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:26:23.479 17:23:23 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 00:26:23.479 17:23:23 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 00:26:23.479 17:23:23 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 00:26:23.479 17:23:23 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:26:23.479 17:23:23 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0 00:26:23.479 17:23:23 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:26:23.479 17:23:23 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:26:23.479 17:23:23 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:26:23.479 17:23:23 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:23.479 17:23:23 -- scripts/common.sh@15 -- $ shopt -s extglob 00:26:23.479 17:23:23 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:26:23.479 17:23:24 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:23.479 17:23:24 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:23.479 17:23:24 -- 
paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.479 17:23:24 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.480 17:23:24 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.480 17:23:24 -- paths/export.sh@5 -- $ export PATH 00:26:23.480 17:23:24 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.480 17:23:24 -- 
common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:26:23.480 17:23:24 -- common/autobuild_common.sh@493 -- $ date +%s 00:26:23.480 17:23:24 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732641804.XXXXXX 00:26:23.480 17:23:24 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732641804.REOcdq 00:26:23.480 17:23:24 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:26:23.480 17:23:24 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:26:23.480 17:23:24 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:26:23.480 17:23:24 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:26:23.480 17:23:24 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:26:23.480 17:23:24 -- common/autobuild_common.sh@509 -- $ get_config_params 00:26:23.480 17:23:24 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:26:23.480 17:23:24 -- common/autotest_common.sh@10 -- $ set +x 00:26:23.480 17:23:24 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:26:23.480 17:23:24 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:26:23.480 17:23:24 -- pm/common@17 -- $ local monitor 00:26:23.480 17:23:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:23.480 17:23:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:23.480 17:23:24 -- pm/common@25 -- $ sleep 1 00:26:23.480 17:23:24 -- pm/common@21 -- $ date +%s 00:26:23.480 17:23:24 -- pm/common@21 -- $ date +%s 00:26:23.480 
17:23:24 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732641804
00:26:23.480 17:23:24 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732641804
00:26:23.480 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732641804_collect-vmstat.pm.log
00:26:23.480 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732641804_collect-cpu-load.pm.log
00:26:24.417 17:23:25 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:26:24.417 17:23:25 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:26:24.417 17:23:25 -- spdk/autobuild.sh@12 -- $ umask 022
00:26:24.417 17:23:25 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:26:24.417 17:23:25 -- spdk/autobuild.sh@16 -- $ date -u
00:26:24.417 Tue Nov 26 05:23:25 PM UTC 2024
00:26:24.417 17:23:25 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:26:24.417 v25.01-pre-264-gc86e5b182
00:26:24.417 17:23:25 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:26:24.417 17:23:25 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:26:24.417 17:23:25 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:26:24.417 17:23:25 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:26:24.417 17:23:25 -- common/autotest_common.sh@10 -- $ set +x
00:26:24.417 ************************************
00:26:24.417 START TEST asan
00:26:24.417 ************************************
00:26:24.417 using asan
00:26:24.417 17:23:25 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:26:24.417
00:26:24.417 real 0m0.000s
00:26:24.417 user 0m0.000s
00:26:24.417 sys 0m0.000s
00:26:24.417 17:23:25 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:26:24.417 17:23:25 asan -- common/autotest_common.sh@10 -- $ set +x
00:26:24.417 ************************************
00:26:24.417 END TEST asan
00:26:24.417 ************************************
00:26:24.676 17:23:25 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:26:24.676 17:23:25 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:26:24.676 17:23:25 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:26:24.676 17:23:25 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:26:24.676 17:23:25 -- common/autotest_common.sh@10 -- $ set +x
00:26:24.676 ************************************
00:26:24.676 START TEST ubsan
00:26:24.676 ************************************
00:26:24.676 using ubsan
00:26:24.676 17:23:25 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:26:24.676
00:26:24.676 real 0m0.000s
00:26:24.676 user 0m0.000s
00:26:24.676 sys 0m0.000s
00:26:24.676 17:23:25 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:26:24.676 ************************************
00:26:24.676 END TEST ubsan
00:26:24.676 ************************************
00:26:24.676 17:23:25 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:26:24.676 17:23:25 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:26:24.676 17:23:25 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:26:24.676 17:23:25 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:26:24.676 17:23:25 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:26:24.676 17:23:25 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:26:24.676 17:23:25 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:26:24.676 17:23:25 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:26:24.676 17:23:25 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:26:24.676 17:23:25 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:26:24.676 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:26:24.676 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:26:25.246 Using 'verbs' RDMA provider
00:26:41.124 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:26:56.086 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:26:56.345 Creating mk/config.mk...done.
00:26:56.345 Creating mk/cc.flags.mk...done.
00:26:56.345 Type 'make' to build.
00:26:56.345 17:23:56 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:26:56.345 17:23:56 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:26:56.345 17:23:56 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:26:56.345 17:23:56 -- common/autotest_common.sh@10 -- $ set +x
00:26:56.345 ************************************
00:26:56.345 START TEST make
00:26:56.345 ************************************
00:26:56.345 17:23:56 make -- common/autotest_common.sh@1129 -- $ make -j10
00:26:56.912 make[1]: Nothing to be done for 'all'.
00:27:09.129 The Meson build system
00:27:09.129 Version: 1.5.0
00:27:09.129 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:27:09.129 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:27:09.129 Build type: native build
00:27:09.129 Program cat found: YES (/usr/bin/cat)
00:27:09.129 Project name: DPDK
00:27:09.129 Project version: 24.03.0
00:27:09.129 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:27:09.129 C linker for the host machine: cc ld.bfd 2.40-14
00:27:09.129 Host machine cpu family: x86_64
00:27:09.129 Host machine cpu: x86_64
00:27:09.129 Message: ## Building in Developer Mode ##
00:27:09.129 Program pkg-config found: YES (/usr/bin/pkg-config)
00:27:09.129 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:27:09.129 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:27:09.129 Program python3 found: YES (/usr/bin/python3)
00:27:09.129 Program cat found: YES (/usr/bin/cat)
00:27:09.129 Compiler for C supports arguments -march=native: YES
00:27:09.129 Checking for size of "void *" : 8
00:27:09.129 Checking for size of "void *" : 8 (cached)
00:27:09.129 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:27:09.129 Library m found: YES
00:27:09.129 Library numa found: YES
00:27:09.129 Has header "numaif.h" : YES
00:27:09.129 Library fdt found: NO
00:27:09.129 Library execinfo found: NO
00:27:09.129 Has header "execinfo.h" : YES
00:27:09.129 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:27:09.129 Run-time dependency libarchive found: NO (tried pkgconfig)
00:27:09.129 Run-time dependency libbsd found: NO (tried pkgconfig)
00:27:09.129 Run-time dependency jansson found: NO (tried pkgconfig)
00:27:09.129 Run-time dependency openssl found: YES 3.1.1
00:27:09.129 Run-time dependency libpcap found: YES 1.10.4
00:27:09.129 Has header "pcap.h" with dependency libpcap: YES
00:27:09.129 Compiler for C supports arguments -Wcast-qual: YES
00:27:09.129 Compiler for C supports arguments -Wdeprecated: YES
00:27:09.129 Compiler for C supports arguments -Wformat: YES
00:27:09.129 Compiler for C supports arguments -Wformat-nonliteral: NO
00:27:09.129 Compiler for C supports arguments -Wformat-security: NO
00:27:09.129 Compiler for C supports arguments -Wmissing-declarations: YES
00:27:09.129 Compiler for C supports arguments -Wmissing-prototypes: YES
00:27:09.129 Compiler for C supports arguments -Wnested-externs: YES
00:27:09.129 Compiler for C supports arguments -Wold-style-definition: YES
00:27:09.129 Compiler for C supports arguments -Wpointer-arith: YES
00:27:09.129 Compiler for C supports arguments -Wsign-compare: YES
00:27:09.129 Compiler for C supports arguments -Wstrict-prototypes: YES
00:27:09.129 Compiler for C supports arguments -Wundef: YES
00:27:09.129 Compiler for C supports arguments -Wwrite-strings: YES
00:27:09.129 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:27:09.129 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:27:09.129 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:27:09.129 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:27:09.129 Program objdump found: YES (/usr/bin/objdump)
00:27:09.129 Compiler for C supports arguments -mavx512f: YES
00:27:09.129 Checking if "AVX512 checking" compiles: YES
00:27:09.129 Fetching value of define "__SSE4_2__" : 1
00:27:09.129 Fetching value of define "__AES__" : 1
00:27:09.129 Fetching value of define "__AVX__" : 1
00:27:09.129 Fetching value of define "__AVX2__" : 1
00:27:09.129 Fetching value of define "__AVX512BW__" : 1
00:27:09.129 Fetching value of define "__AVX512CD__" : 1
00:27:09.129 Fetching value of define "__AVX512DQ__" : 1
00:27:09.129 Fetching value of define "__AVX512F__" : 1
00:27:09.129 Fetching value of define "__AVX512VL__" : 1
00:27:09.129 Fetching value of define "__PCLMUL__" : 1
00:27:09.129 Fetching value of define "__RDRND__" : 1
00:27:09.129 Fetching value of define "__RDSEED__" : 1
00:27:09.129 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:27:09.129 Fetching value of define "__znver1__" : (undefined)
00:27:09.129 Fetching value of define "__znver2__" : (undefined)
00:27:09.129 Fetching value of define "__znver3__" : (undefined)
00:27:09.129 Fetching value of define "__znver4__" : (undefined)
00:27:09.129 Library asan found: YES
00:27:09.129 Compiler for C supports arguments -Wno-format-truncation: YES
00:27:09.129 Message: lib/log: Defining dependency "log"
00:27:09.129 Message: lib/kvargs: Defining dependency "kvargs"
00:27:09.129 Message: lib/telemetry: Defining dependency "telemetry"
00:27:09.129 Library rt found: YES
00:27:09.129 Checking for function "getentropy" : NO
00:27:09.129 Message: lib/eal: Defining dependency "eal"
00:27:09.129 Message: lib/ring: Defining dependency "ring"
00:27:09.129 Message: lib/rcu: Defining dependency "rcu"
00:27:09.129 Message: lib/mempool: Defining dependency "mempool"
00:27:09.129 Message: lib/mbuf: Defining dependency "mbuf"
00:27:09.129 Fetching value of define "__PCLMUL__" : 1 (cached)
00:27:09.129 Fetching value of define "__AVX512F__" : 1 (cached)
00:27:09.129 Fetching value of define "__AVX512BW__" : 1 (cached)
00:27:09.129 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:27:09.129 Fetching value of define "__AVX512VL__" : 1 (cached)
00:27:09.129 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:27:09.129 Compiler for C supports arguments -mpclmul: YES
00:27:09.129 Compiler for C supports arguments -maes: YES
00:27:09.130 Compiler for C supports arguments -mavx512f: YES (cached)
00:27:09.130 Compiler for C supports arguments -mavx512bw: YES
00:27:09.130 Compiler for C supports arguments -mavx512dq: YES
00:27:09.130 Compiler for C supports arguments -mavx512vl: YES
00:27:09.130 Compiler for C supports arguments -mvpclmulqdq: YES
00:27:09.130 Compiler for C supports arguments -mavx2: YES
00:27:09.130 Compiler for C supports arguments -mavx: YES
00:27:09.130 Message: lib/net: Defining dependency "net"
00:27:09.130 Message: lib/meter: Defining dependency "meter"
00:27:09.130 Message: lib/ethdev: Defining dependency "ethdev"
00:27:09.130 Message: lib/pci: Defining dependency "pci"
00:27:09.130 Message: lib/cmdline: Defining dependency "cmdline"
00:27:09.130 Message: lib/hash: Defining dependency "hash"
00:27:09.130 Message: lib/timer: Defining dependency "timer"
00:27:09.130 Message: lib/compressdev: Defining dependency "compressdev"
00:27:09.130 Message: lib/cryptodev: Defining dependency "cryptodev"
00:27:09.130 Message: lib/dmadev: Defining dependency "dmadev"
00:27:09.130 Compiler for C supports arguments -Wno-cast-qual: YES
00:27:09.130 Message: lib/power: Defining dependency "power"
00:27:09.130 Message: lib/reorder: Defining dependency "reorder"
00:27:09.130 Message: lib/security: Defining dependency "security"
00:27:09.130 Has header "linux/userfaultfd.h" : YES
00:27:09.130 Has header "linux/vduse.h" : YES
00:27:09.130 Message: lib/vhost: Defining dependency "vhost"
00:27:09.130 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:27:09.130 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:27:09.130 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:27:09.130 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:27:09.130 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:27:09.130 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:27:09.130 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:27:09.130 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:27:09.130 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:27:09.130 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:27:09.130 Program doxygen found: YES (/usr/local/bin/doxygen)
00:27:09.130 Configuring doxy-api-html.conf using configuration
00:27:09.130 Configuring doxy-api-man.conf using configuration
00:27:09.130 Program mandb found: YES (/usr/bin/mandb)
00:27:09.130 Program sphinx-build found: NO
00:27:09.130 Configuring rte_build_config.h using configuration
00:27:09.130 Message:
00:27:09.130 =================
00:27:09.130 Applications Enabled
00:27:09.130 =================
00:27:09.130
00:27:09.130 apps:
00:27:09.130
00:27:09.130
00:27:09.130 Message:
00:27:09.130 =================
00:27:09.130 Libraries Enabled
00:27:09.130 =================
00:27:09.130
00:27:09.130 libs:
00:27:09.130 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:27:09.130 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:27:09.130 cryptodev, dmadev, power, reorder, security, vhost,
00:27:09.130
00:27:09.130 Message:
00:27:09.130 ===============
00:27:09.130 Drivers Enabled
00:27:09.130 ===============
00:27:09.130
00:27:09.130 common:
00:27:09.130
00:27:09.130 bus:
00:27:09.130 pci, vdev,
00:27:09.130 mempool:
00:27:09.130 ring,
00:27:09.130 dma:
00:27:09.130
00:27:09.130 net:
00:27:09.130
00:27:09.130 crypto:
00:27:09.130
00:27:09.130 compress:
00:27:09.130
00:27:09.130 vdpa:
00:27:09.130
00:27:09.130
00:27:09.130 Message:
00:27:09.130 =================
00:27:09.130 Content Skipped
00:27:09.130 =================
00:27:09.130
00:27:09.130 apps:
00:27:09.130 dumpcap: explicitly disabled via build config
00:27:09.130 graph: explicitly disabled via build config
00:27:09.130 pdump: explicitly disabled via build config
00:27:09.130 proc-info: explicitly disabled via build config
00:27:09.130 test-acl: explicitly disabled via build config
00:27:09.130 test-bbdev: explicitly disabled via build config
00:27:09.130 test-cmdline: explicitly disabled via build config
00:27:09.130 test-compress-perf: explicitly disabled via build config
00:27:09.130 test-crypto-perf: explicitly disabled via build config
00:27:09.130 test-dma-perf: explicitly disabled via build config
00:27:09.130 test-eventdev: explicitly disabled via build config
00:27:09.130 test-fib: explicitly disabled via build config
00:27:09.130 test-flow-perf: explicitly disabled via build config
00:27:09.130 test-gpudev: explicitly disabled via build config
00:27:09.130 test-mldev: explicitly disabled via build config
00:27:09.130 test-pipeline: explicitly disabled via build config
00:27:09.130 test-pmd: explicitly disabled via build config
00:27:09.130 test-regex: explicitly disabled via build config
00:27:09.130 test-sad: explicitly disabled via build config
00:27:09.130 test-security-perf: explicitly disabled via build config
00:27:09.130
00:27:09.130 libs:
00:27:09.130 argparse: explicitly disabled via build config
00:27:09.130 metrics: explicitly disabled via build config
00:27:09.130 acl: explicitly disabled via build config
00:27:09.130 bbdev: explicitly disabled via build config
00:27:09.130 bitratestats: explicitly disabled via build config
00:27:09.130 bpf: explicitly disabled via build config
00:27:09.130 cfgfile: explicitly disabled via build config
00:27:09.130 distributor: explicitly disabled via build config
00:27:09.130 efd: explicitly disabled via build config
00:27:09.130 eventdev: explicitly disabled via build config
00:27:09.130 dispatcher: explicitly disabled via build config
00:27:09.130 gpudev: explicitly disabled via build config
00:27:09.130 gro: explicitly disabled via build config
00:27:09.130 gso: explicitly disabled via build config
00:27:09.130 ip_frag: explicitly disabled via build config
00:27:09.130 jobstats: explicitly disabled via build config
00:27:09.130 latencystats: explicitly disabled via build config
00:27:09.130 lpm: explicitly disabled via build config
00:27:09.130 member: explicitly disabled via build config
00:27:09.130 pcapng: explicitly disabled via build config
00:27:09.130 rawdev: explicitly disabled via build config
00:27:09.130 regexdev: explicitly disabled via build config
00:27:09.130 mldev: explicitly disabled via build config
00:27:09.130 rib: explicitly disabled via build config
00:27:09.130 sched: explicitly disabled via build config
00:27:09.130 stack: explicitly disabled via build config
00:27:09.130 ipsec: explicitly disabled via build config
00:27:09.130 pdcp: explicitly disabled via build config
00:27:09.130 fib: explicitly disabled via build config
00:27:09.130 port: explicitly disabled via build config
00:27:09.130 pdump: explicitly disabled via build config
00:27:09.130 table: explicitly disabled via build config
00:27:09.130 pipeline: explicitly disabled via build config
00:27:09.130 graph: explicitly disabled via build config
00:27:09.130 node: explicitly disabled via build config
00:27:09.130
00:27:09.130 drivers:
00:27:09.130 common/cpt: not in enabled drivers build config
00:27:09.130 common/dpaax: not in enabled drivers build config
00:27:09.130 common/iavf: not in enabled drivers build config
00:27:09.130 common/idpf: not in enabled drivers build config
00:27:09.130 common/ionic: not in enabled drivers build config
00:27:09.130 common/mvep: not in enabled drivers build config
00:27:09.130 common/octeontx: not in enabled drivers build config
00:27:09.130 bus/auxiliary: not in enabled drivers build config
00:27:09.130 bus/cdx: not in enabled drivers build config
00:27:09.130 bus/dpaa: not in enabled drivers build config
00:27:09.130 bus/fslmc: not in enabled drivers build config
00:27:09.130 bus/ifpga: not in enabled drivers build config
00:27:09.130 bus/platform: not in enabled drivers build config
00:27:09.130 bus/uacce: not in enabled drivers build config
00:27:09.130 bus/vmbus: not in enabled drivers build config
00:27:09.130 common/cnxk: not in enabled drivers build config
00:27:09.130 common/mlx5: not in enabled drivers build config
00:27:09.130 common/nfp: not in enabled drivers build config
00:27:09.130 common/nitrox: not in enabled drivers build config
00:27:09.130 common/qat: not in enabled drivers build config
00:27:09.130 common/sfc_efx: not in enabled drivers build config
00:27:09.130 mempool/bucket: not in enabled drivers build config
00:27:09.130 mempool/cnxk: not in enabled drivers build config
00:27:09.130 mempool/dpaa: not in enabled drivers build config
00:27:09.130 mempool/dpaa2: not in enabled drivers build config
00:27:09.130 mempool/octeontx: not in enabled drivers build config
00:27:09.130 mempool/stack: not in enabled drivers build config
00:27:09.130 dma/cnxk: not in enabled drivers build config
00:27:09.130 dma/dpaa: not in enabled drivers build config
00:27:09.130 dma/dpaa2: not in enabled drivers build config
00:27:09.130 dma/hisilicon: not in enabled drivers build config
00:27:09.130 dma/idxd: not in enabled drivers build config
00:27:09.130 dma/ioat: not in enabled drivers build config
00:27:09.130 dma/skeleton: not in enabled drivers build config
00:27:09.130 net/af_packet: not in enabled drivers build config
00:27:09.130 net/af_xdp: not in enabled drivers build config
00:27:09.130 net/ark: not in enabled drivers build config
00:27:09.130 net/atlantic: not in enabled drivers build config
00:27:09.130 net/avp: not in enabled drivers build config
00:27:09.130 net/axgbe: not in enabled drivers build config
00:27:09.130 net/bnx2x: not in enabled drivers build config
00:27:09.130 net/bnxt: not in enabled drivers build config
00:27:09.130 net/bonding: not in enabled drivers build config
00:27:09.130 net/cnxk: not in enabled drivers build config
00:27:09.130 net/cpfl: not in enabled drivers build config
00:27:09.130 net/cxgbe: not in enabled drivers build config
00:27:09.130 net/dpaa: not in enabled drivers build config
00:27:09.130 net/dpaa2: not in enabled drivers build config
00:27:09.130 net/e1000: not in enabled drivers build config
00:27:09.130 net/ena: not in enabled drivers build config
00:27:09.130 net/enetc: not in enabled drivers build config
00:27:09.130 net/enetfec: not in enabled drivers build config
00:27:09.130 net/enic: not in enabled drivers build config
00:27:09.130 net/failsafe: not in enabled drivers build config
00:27:09.130 net/fm10k: not in enabled drivers build config
00:27:09.130 net/gve: not in enabled drivers build config
00:27:09.130 net/hinic: not in enabled drivers build config
00:27:09.130 net/hns3: not in enabled drivers build config
00:27:09.130 net/i40e: not in enabled drivers build config
00:27:09.130 net/iavf: not in enabled drivers build config
00:27:09.130 net/ice: not in enabled drivers build config
00:27:09.130 net/idpf: not in enabled drivers build config
00:27:09.130 net/igc: not in enabled drivers build config
00:27:09.130 net/ionic: not in enabled drivers build config
00:27:09.130 net/ipn3ke: not in enabled drivers build config
00:27:09.130 net/ixgbe: not in enabled drivers build config
00:27:09.130 net/mana: not in enabled drivers build config
00:27:09.130 net/memif: not in enabled drivers build config
00:27:09.130 net/mlx4: not in enabled drivers build config
00:27:09.130 net/mlx5: not in enabled drivers build config
00:27:09.130 net/mvneta: not in enabled drivers build config
00:27:09.130 net/mvpp2: not in enabled drivers build config
00:27:09.130 net/netvsc: not in enabled drivers build config
00:27:09.130 net/nfb: not in enabled drivers build config
00:27:09.130 net/nfp: not in enabled drivers build config
00:27:09.130 net/ngbe: not in enabled drivers build config
00:27:09.130 net/null: not in enabled drivers build config
00:27:09.130 net/octeontx: not in enabled drivers build config
00:27:09.130 net/octeon_ep: not in enabled drivers build config
00:27:09.130 net/pcap: not in enabled drivers build config
00:27:09.130 net/pfe: not in enabled drivers build config
00:27:09.130 net/qede: not in enabled drivers build config
00:27:09.130 net/ring: not in enabled drivers build config
00:27:09.130 net/sfc: not in enabled drivers build config
00:27:09.130 net/softnic: not in enabled drivers build config
00:27:09.130 net/tap: not in enabled drivers build config
00:27:09.130 net/thunderx: not in enabled drivers build config
00:27:09.130 net/txgbe: not in enabled drivers build config
00:27:09.130 net/vdev_netvsc: not in enabled drivers build config
00:27:09.130 net/vhost: not in enabled drivers build config
00:27:09.130 net/virtio: not in enabled drivers build config
00:27:09.130 net/vmxnet3: not in enabled drivers build config
00:27:09.130 raw/*: missing internal dependency, "rawdev"
00:27:09.130 crypto/armv8: not in enabled drivers build config
00:27:09.130 crypto/bcmfs: not in enabled drivers build config
00:27:09.130 crypto/caam_jr: not in enabled drivers build config
00:27:09.130 crypto/ccp: not in enabled drivers build config
00:27:09.130 crypto/cnxk: not in enabled drivers build config
00:27:09.130 crypto/dpaa_sec: not in enabled drivers build config
00:27:09.130 crypto/dpaa2_sec: not in enabled drivers build config
00:27:09.130 crypto/ipsec_mb: not in enabled drivers build config
00:27:09.130 crypto/mlx5: not in enabled drivers build config
00:27:09.130 crypto/mvsam: not in enabled drivers build config
00:27:09.130 crypto/nitrox: not in enabled drivers build config
00:27:09.130 crypto/null: not in enabled drivers build config
00:27:09.130 crypto/octeontx: not in enabled drivers build config
00:27:09.130 crypto/openssl: not in enabled drivers build config
00:27:09.130 crypto/scheduler: not in enabled drivers build config
00:27:09.130 crypto/uadk: not in enabled drivers build config
00:27:09.130 crypto/virtio: not in enabled drivers build config
00:27:09.130 compress/isal: not in enabled drivers build config
00:27:09.130 compress/mlx5: not in enabled drivers build config
00:27:09.130 compress/nitrox: not in enabled drivers build config
00:27:09.130 compress/octeontx: not in enabled drivers build config
00:27:09.130 compress/zlib: not in enabled drivers build config
00:27:09.130 regex/*: missing internal dependency, "regexdev"
00:27:09.130 ml/*: missing internal dependency, "mldev"
00:27:09.131 vdpa/ifc: not in enabled drivers build config
00:27:09.131 vdpa/mlx5: not in enabled drivers build config
00:27:09.131 vdpa/nfp: not in enabled drivers build config
00:27:09.131 vdpa/sfc: not in enabled drivers build config
00:27:09.131 event/*: missing internal dependency, "eventdev"
00:27:09.131 baseband/*: missing internal dependency, "bbdev"
00:27:09.131 gpu/*: missing internal dependency, "gpudev"
00:27:09.131
00:27:09.131
00:27:09.131 Build targets in project: 85
00:27:09.131
00:27:09.131 DPDK 24.03.0
00:27:09.131
00:27:09.131 User defined options
00:27:09.131 buildtype : debug
00:27:09.131 default_library : shared
00:27:09.131 libdir : lib
00:27:09.131 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:27:09.131 b_sanitize : address
00:27:09.131 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:27:09.131 c_link_args :
00:27:09.131 cpu_instruction_set: native
00:27:09.131 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:27:09.131 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:27:09.131 enable_docs : false
00:27:09.131 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:27:09.131 enable_kmods : false
00:27:09.131 max_lcores : 128
00:27:09.131 tests : false
00:27:09.131
00:27:09.131 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:27:09.131 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:27:09.131 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:27:09.131 [2/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:27:09.131 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:27:09.131 [4/268] Linking static target lib/librte_log.a
00:27:09.131 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:27:09.131 [6/268] Linking static target lib/librte_kvargs.a
00:27:09.131 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:27:09.131 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:27:09.131 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:27:09.131 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:27:09.131 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:27:09.131 [12/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:27:09.131 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:27:09.131 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:27:09.131 [15/268] Linking static target lib/librte_telemetry.a
00:27:09.131 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:27:09.131 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:27:09.396 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:27:09.655 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:27:09.655 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:27:09.655 [21/268] Linking target lib/librte_log.so.24.1
00:27:09.655 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:27:09.655 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:27:09.655 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:27:09.655 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:27:09.655 [26/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:27:09.914 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:27:09.914 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:27:09.914 [29/268] Linking target lib/librte_kvargs.so.24.1
00:27:09.914 [30/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:27:09.914 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:27:09.914 [32/268] Linking target lib/librte_telemetry.so.24.1
00:27:09.914 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:27:10.172 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:27:10.172 [35/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:27:10.172 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:27:10.172 [37/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:27:10.172 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:27:10.429 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:27:10.429 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:27:10.429 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:27:10.429 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:27:10.430 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:27:10.430 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:27:10.688 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:27:10.688 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:27:10.688 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:27:10.947 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:27:10.947 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:27:10.947 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:27:10.947 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:27:11.206 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:27:11.206 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:27:11.206 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:27:11.206 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:27:11.206 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:27:11.464 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:27:11.464 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:27:11.464 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:27:11.464 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:27:11.464 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:27:11.722 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:27:11.722 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:27:11.722 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:27:11.722 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:27:11.722 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:27:11.981 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:27:11.981 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:27:12.241 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:27:12.241 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:27:12.241 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:27:12.241 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:27:12.241 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:27:12.241 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:27:12.241 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:27:12.241 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:27:12.500 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:27:12.500 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:27:12.500 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:27:12.500 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:27:12.759 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:27:12.759 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:27:12.759 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:27:13.019 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:27:13.019 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:27:13.019 [86/268] Linking static target lib/librte_ring.a
00:27:13.019 [87/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:27:13.019 [88/268] Linking static target lib/librte_eal.a
00:27:13.279 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:27:13.279 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:27:13.279 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:27:13.279 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:27:13.279 [93/268] Linking static target lib/librte_mempool.a
00:27:13.539 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:27:13.539 [95/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:27:13.539 [96/268] Linking static target lib/librte_rcu.a
00:27:13.539 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:27:13.798 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:27:13.798 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:27:13.798 [100/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:27:13.798 [101/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:27:13.798 [102/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:27:14.058 [103/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:27:14.058 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:27:14.058 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:27:14.058 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:27:14.058 [107/268] Linking static target lib/librte_net.a
00:27:14.318 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:27:14.318 [109/268] Linking static target lib/librte_meter.a
00:27:14.318 [110/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:27:14.318 [111/268] Linking static target lib/librte_mbuf.a
00:27:14.578 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:27:14.578 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:27:14.578 [114/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:27:14.578 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:27:14.578 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:27:14.838 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:27:14.838 [118/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:27:14.838 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:27:15.096 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:27:15.354 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:27:15.354 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:27:15.354 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:27:15.354 [124/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:27:15.613 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:27:15.613 [126/268] Linking static target lib/librte_pci.a
00:27:15.613 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:27:15.613 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:27:15.872 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:27:15.872 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:27:15.872 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:27:15.872 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:27:15.872 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:27:15.872 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:27:15.872 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:27:15.872 [136/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:27:15.872 [137/268] Compiling
C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:27:16.131 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:27:16.131 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:27:16.131 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:27:16.131 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:27:16.131 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:27:16.131 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:27:16.131 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:27:16.131 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:27:16.131 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:27:16.131 [147/268] Linking static target lib/librte_cmdline.a 00:27:16.390 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:27:16.390 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:27:16.648 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:27:16.648 [151/268] Linking static target lib/librte_timer.a 00:27:16.648 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:27:16.648 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:27:16.648 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:27:16.907 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:27:17.166 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:27:17.166 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:27:17.166 [158/268] Linking static target lib/librte_compressdev.a 00:27:17.166 [159/268] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:27:17.166 [160/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:27:17.166 [161/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:27:17.166 [162/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:27:17.166 [163/268] Linking static target lib/librte_ethdev.a 00:27:17.425 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:27:17.425 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:27:17.425 [166/268] Linking static target lib/librte_dmadev.a 00:27:17.425 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:27:17.684 [168/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:27:17.684 [169/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:27:17.684 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:27:17.943 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:27:17.943 [172/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:27:17.943 [173/268] Linking static target lib/librte_hash.a 00:27:17.943 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:27:17.943 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:27:18.202 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:27:18.202 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:27:18.461 [178/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:27:18.461 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:27:18.461 [180/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:27:18.461 [181/268] Compiling 
C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:27:18.461 [182/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:27:18.461 [183/268] Linking static target lib/librte_cryptodev.a 00:27:18.461 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:27:18.461 [185/268] Linking static target lib/librte_power.a 00:27:19.029 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:27:19.029 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:27:19.029 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:27:19.029 [189/268] Linking static target lib/librte_reorder.a 00:27:19.029 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:27:19.029 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:27:19.029 [192/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:27:19.029 [193/268] Linking static target lib/librte_security.a 00:27:19.598 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:27:19.598 [195/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:27:19.856 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:27:19.856 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:27:20.115 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:27:20.115 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:27:20.115 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:27:20.374 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:27:20.374 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:27:20.655 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 
00:27:20.655 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:27:20.655 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:27:20.655 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:27:20.915 [207/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:27:20.915 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:27:20.915 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:27:20.915 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:27:20.915 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:27:21.175 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:27:21.175 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:27:21.175 [214/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:27:21.175 [215/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:27:21.175 [216/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:27:21.175 [217/268] Linking static target drivers/librte_bus_pci.a 00:27:21.175 [218/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:27:21.175 [219/268] Linking static target drivers/librte_bus_vdev.a 00:27:21.434 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:27:21.434 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:27:21.434 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:27:21.693 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:27:21.693 [224/268] Compiling C object 
drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:27:21.693 [225/268] Linking static target drivers/librte_mempool_ring.a 00:27:21.693 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:27:21.693 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:27:23.070 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:27:24.005 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:27:24.005 [230/268] Linking target lib/librte_eal.so.24.1 00:27:24.263 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:27:24.263 [232/268] Linking target lib/librte_ring.so.24.1 00:27:24.263 [233/268] Linking target lib/librte_meter.so.24.1 00:27:24.263 [234/268] Linking target lib/librte_dmadev.so.24.1 00:27:24.263 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:27:24.263 [236/268] Linking target lib/librte_pci.so.24.1 00:27:24.263 [237/268] Linking target lib/librte_timer.so.24.1 00:27:24.522 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:27:24.522 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:27:24.522 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:27:24.522 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:27:24.522 [242/268] Linking target lib/librte_rcu.so.24.1 00:27:24.522 [243/268] Linking target lib/librte_mempool.so.24.1 00:27:24.522 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:27:24.522 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:27:24.522 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:27:24.780 [247/268] Generating symbol file 
lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:27:24.780 [248/268] Linking target lib/librte_mbuf.so.24.1 00:27:24.780 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:27:24.780 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:27:25.038 [251/268] Linking target lib/librte_compressdev.so.24.1 00:27:25.038 [252/268] Linking target lib/librte_net.so.24.1 00:27:25.038 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:27:25.038 [254/268] Linking target lib/librte_reorder.so.24.1 00:27:25.038 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:27:25.038 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:27:25.038 [257/268] Linking target lib/librte_hash.so.24.1 00:27:25.038 [258/268] Linking target lib/librte_cmdline.so.24.1 00:27:25.038 [259/268] Linking target lib/librte_security.so.24.1 00:27:25.297 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:27:26.234 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:27:26.493 [262/268] Linking target lib/librte_ethdev.so.24.1 00:27:26.493 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:27:26.752 [264/268] Linking target lib/librte_power.so.24.1 00:27:27.010 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:27:27.010 [266/268] Linking static target lib/librte_vhost.a 00:27:29.545 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:27:29.545 [268/268] Linking target lib/librte_vhost.so.24.1 00:27:29.545 INFO: autodetecting backend as ninja 00:27:29.545 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:27:51.493 CC lib/log/log.o 00:27:51.493 CC lib/log/log_flags.o 00:27:51.493 CC 
lib/log/log_deprecated.o 00:27:51.493 CC lib/ut/ut.o 00:27:51.493 CC lib/ut_mock/mock.o 00:27:51.493 LIB libspdk_log.a 00:27:51.493 LIB libspdk_ut.a 00:27:51.493 LIB libspdk_ut_mock.a 00:27:51.493 SO libspdk_log.so.7.1 00:27:51.493 SO libspdk_ut.so.2.0 00:27:51.493 SO libspdk_ut_mock.so.6.0 00:27:51.493 SYMLINK libspdk_log.so 00:27:51.493 SYMLINK libspdk_ut_mock.so 00:27:51.493 SYMLINK libspdk_ut.so 00:27:51.493 CC lib/ioat/ioat.o 00:27:51.493 CC lib/util/base64.o 00:27:51.493 CC lib/util/bit_array.o 00:27:51.493 CC lib/util/crc32.o 00:27:51.493 CC lib/util/crc32c.o 00:27:51.493 CC lib/util/cpuset.o 00:27:51.493 CC lib/util/crc16.o 00:27:51.493 CC lib/dma/dma.o 00:27:51.493 CXX lib/trace_parser/trace.o 00:27:51.493 CC lib/vfio_user/host/vfio_user_pci.o 00:27:51.493 CC lib/util/crc32_ieee.o 00:27:51.493 CC lib/util/crc64.o 00:27:51.493 CC lib/util/dif.o 00:27:51.493 CC lib/util/fd.o 00:27:51.493 LIB libspdk_dma.a 00:27:51.493 CC lib/util/fd_group.o 00:27:51.493 SO libspdk_dma.so.5.0 00:27:51.493 CC lib/vfio_user/host/vfio_user.o 00:27:51.493 CC lib/util/file.o 00:27:51.493 CC lib/util/hexlify.o 00:27:51.493 SYMLINK libspdk_dma.so 00:27:51.493 CC lib/util/iov.o 00:27:51.493 LIB libspdk_ioat.a 00:27:51.493 SO libspdk_ioat.so.7.0 00:27:51.493 CC lib/util/math.o 00:27:51.493 CC lib/util/net.o 00:27:51.493 SYMLINK libspdk_ioat.so 00:27:51.493 CC lib/util/pipe.o 00:27:51.493 CC lib/util/strerror_tls.o 00:27:51.493 CC lib/util/string.o 00:27:51.493 LIB libspdk_vfio_user.a 00:27:51.493 SO libspdk_vfio_user.so.5.0 00:27:51.493 CC lib/util/uuid.o 00:27:51.493 CC lib/util/xor.o 00:27:51.493 CC lib/util/zipf.o 00:27:51.493 CC lib/util/md5.o 00:27:51.493 SYMLINK libspdk_vfio_user.so 00:27:51.493 LIB libspdk_util.a 00:27:51.493 SO libspdk_util.so.10.1 00:27:51.493 LIB libspdk_trace_parser.a 00:27:51.493 SO libspdk_trace_parser.so.6.0 00:27:51.493 SYMLINK libspdk_util.so 00:27:51.493 SYMLINK libspdk_trace_parser.so 00:27:51.493 CC lib/rdma_utils/rdma_utils.o 00:27:51.493 CC 
lib/idxd/idxd_kernel.o 00:27:51.493 CC lib/idxd/idxd_user.o 00:27:51.493 CC lib/idxd/idxd.o 00:27:51.493 CC lib/env_dpdk/env.o 00:27:51.493 CC lib/env_dpdk/memory.o 00:27:51.493 CC lib/env_dpdk/pci.o 00:27:51.493 CC lib/conf/conf.o 00:27:51.493 CC lib/vmd/vmd.o 00:27:51.493 CC lib/json/json_parse.o 00:27:51.493 LIB libspdk_conf.a 00:27:51.493 SO libspdk_conf.so.6.0 00:27:51.493 LIB libspdk_rdma_utils.a 00:27:51.493 CC lib/env_dpdk/init.o 00:27:51.493 CC lib/json/json_util.o 00:27:51.493 SO libspdk_rdma_utils.so.1.0 00:27:51.493 SYMLINK libspdk_conf.so 00:27:51.493 CC lib/vmd/led.o 00:27:51.493 CC lib/env_dpdk/threads.o 00:27:51.493 SYMLINK libspdk_rdma_utils.so 00:27:51.493 CC lib/env_dpdk/pci_ioat.o 00:27:51.493 CC lib/env_dpdk/pci_virtio.o 00:27:51.493 CC lib/json/json_write.o 00:27:51.493 CC lib/env_dpdk/pci_vmd.o 00:27:51.493 CC lib/env_dpdk/pci_idxd.o 00:27:51.493 CC lib/env_dpdk/pci_event.o 00:27:51.493 CC lib/env_dpdk/sigbus_handler.o 00:27:51.493 CC lib/env_dpdk/pci_dpdk.o 00:27:51.493 CC lib/env_dpdk/pci_dpdk_2207.o 00:27:51.493 CC lib/rdma_provider/common.o 00:27:51.493 CC lib/env_dpdk/pci_dpdk_2211.o 00:27:51.493 LIB libspdk_idxd.a 00:27:51.493 LIB libspdk_json.a 00:27:51.493 SO libspdk_idxd.so.12.1 00:27:51.493 CC lib/rdma_provider/rdma_provider_verbs.o 00:27:51.493 LIB libspdk_vmd.a 00:27:51.493 SO libspdk_json.so.6.0 00:27:51.493 SO libspdk_vmd.so.6.0 00:27:51.493 SYMLINK libspdk_idxd.so 00:27:51.493 SYMLINK libspdk_json.so 00:27:51.493 SYMLINK libspdk_vmd.so 00:27:51.493 LIB libspdk_rdma_provider.a 00:27:51.493 SO libspdk_rdma_provider.so.7.0 00:27:51.493 SYMLINK libspdk_rdma_provider.so 00:27:51.493 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:27:51.493 CC lib/jsonrpc/jsonrpc_server.o 00:27:51.493 CC lib/jsonrpc/jsonrpc_client.o 00:27:51.493 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:27:51.753 LIB libspdk_jsonrpc.a 00:27:51.753 SO libspdk_jsonrpc.so.6.0 00:27:51.753 SYMLINK libspdk_jsonrpc.so 00:27:51.753 LIB libspdk_env_dpdk.a 00:27:52.013 SO 
libspdk_env_dpdk.so.15.1 00:27:52.013 SYMLINK libspdk_env_dpdk.so 00:27:52.272 CC lib/rpc/rpc.o 00:27:52.530 LIB libspdk_rpc.a 00:27:52.530 SO libspdk_rpc.so.6.0 00:27:52.530 SYMLINK libspdk_rpc.so 00:27:53.106 CC lib/notify/notify.o 00:27:53.106 CC lib/notify/notify_rpc.o 00:27:53.106 CC lib/trace/trace.o 00:27:53.106 CC lib/trace/trace_rpc.o 00:27:53.106 CC lib/trace/trace_flags.o 00:27:53.106 CC lib/keyring/keyring_rpc.o 00:27:53.106 CC lib/keyring/keyring.o 00:27:53.106 LIB libspdk_notify.a 00:27:53.106 SO libspdk_notify.so.6.0 00:27:53.106 SYMLINK libspdk_notify.so 00:27:53.381 LIB libspdk_trace.a 00:27:53.381 LIB libspdk_keyring.a 00:27:53.381 SO libspdk_keyring.so.2.0 00:27:53.381 SO libspdk_trace.so.11.0 00:27:53.381 SYMLINK libspdk_keyring.so 00:27:53.381 SYMLINK libspdk_trace.so 00:27:53.641 CC lib/sock/sock.o 00:27:53.641 CC lib/sock/sock_rpc.o 00:27:53.900 CC lib/thread/iobuf.o 00:27:53.900 CC lib/thread/thread.o 00:27:54.159 LIB libspdk_sock.a 00:27:54.418 SO libspdk_sock.so.10.0 00:27:54.418 SYMLINK libspdk_sock.so 00:27:54.678 CC lib/nvme/nvme_ctrlr_cmd.o 00:27:54.678 CC lib/nvme/nvme_ctrlr.o 00:27:54.678 CC lib/nvme/nvme_fabric.o 00:27:54.678 CC lib/nvme/nvme_ns_cmd.o 00:27:54.678 CC lib/nvme/nvme_ns.o 00:27:54.678 CC lib/nvme/nvme_pcie_common.o 00:27:54.678 CC lib/nvme/nvme_pcie.o 00:27:54.678 CC lib/nvme/nvme.o 00:27:54.678 CC lib/nvme/nvme_qpair.o 00:27:55.617 CC lib/nvme/nvme_quirks.o 00:27:55.617 LIB libspdk_thread.a 00:27:55.617 CC lib/nvme/nvme_transport.o 00:27:55.617 CC lib/nvme/nvme_discovery.o 00:27:55.617 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:27:55.617 SO libspdk_thread.so.11.0 00:27:55.617 SYMLINK libspdk_thread.so 00:27:55.617 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:27:55.876 CC lib/nvme/nvme_tcp.o 00:27:55.876 CC lib/nvme/nvme_opal.o 00:27:55.876 CC lib/nvme/nvme_io_msg.o 00:27:55.876 CC lib/nvme/nvme_poll_group.o 00:27:56.135 CC lib/nvme/nvme_zns.o 00:27:56.135 CC lib/nvme/nvme_stubs.o 00:27:56.393 CC lib/nvme/nvme_auth.o 00:27:56.393 CC 
lib/nvme/nvme_cuse.o 00:27:56.393 CC lib/nvme/nvme_rdma.o 00:27:56.652 CC lib/accel/accel.o 00:27:56.652 CC lib/accel/accel_rpc.o 00:27:56.910 CC lib/blob/blobstore.o 00:27:56.910 CC lib/blob/request.o 00:27:56.910 CC lib/init/json_config.o 00:27:56.910 CC lib/init/subsystem.o 00:27:57.168 CC lib/init/subsystem_rpc.o 00:27:57.168 CC lib/init/rpc.o 00:27:57.168 CC lib/blob/zeroes.o 00:27:57.168 CC lib/accel/accel_sw.o 00:27:57.168 CC lib/blob/blob_bs_dev.o 00:27:57.426 LIB libspdk_init.a 00:27:57.426 SO libspdk_init.so.6.0 00:27:57.426 SYMLINK libspdk_init.so 00:27:57.426 CC lib/virtio/virtio.o 00:27:57.426 CC lib/virtio/virtio_vhost_user.o 00:27:57.426 CC lib/virtio/virtio_vfio_user.o 00:27:57.685 CC lib/virtio/virtio_pci.o 00:27:57.685 CC lib/event/app.o 00:27:57.685 CC lib/fsdev/fsdev.o 00:27:57.685 CC lib/fsdev/fsdev_io.o 00:27:57.685 CC lib/fsdev/fsdev_rpc.o 00:27:57.942 CC lib/event/reactor.o 00:27:57.942 LIB libspdk_virtio.a 00:27:57.942 CC lib/event/log_rpc.o 00:27:57.942 CC lib/event/app_rpc.o 00:27:57.942 SO libspdk_virtio.so.7.0 00:27:57.942 LIB libspdk_accel.a 00:27:57.942 SO libspdk_accel.so.16.0 00:27:57.942 SYMLINK libspdk_virtio.so 00:27:57.942 CC lib/event/scheduler_static.o 00:27:58.199 SYMLINK libspdk_accel.so 00:27:58.199 LIB libspdk_nvme.a 00:27:58.199 SO libspdk_nvme.so.15.0 00:27:58.457 CC lib/bdev/bdev.o 00:27:58.457 CC lib/bdev/bdev_rpc.o 00:27:58.457 CC lib/bdev/scsi_nvme.o 00:27:58.457 CC lib/bdev/bdev_zone.o 00:27:58.457 CC lib/bdev/part.o 00:27:58.457 LIB libspdk_fsdev.a 00:27:58.457 SO libspdk_fsdev.so.2.0 00:27:58.457 LIB libspdk_event.a 00:27:58.457 SYMLINK libspdk_fsdev.so 00:27:58.457 SO libspdk_event.so.14.0 00:27:58.713 SYMLINK libspdk_nvme.so 00:27:58.713 SYMLINK libspdk_event.so 00:27:58.713 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:27:59.646 LIB libspdk_fuse_dispatcher.a 00:27:59.646 SO libspdk_fuse_dispatcher.so.1.0 00:27:59.646 SYMLINK libspdk_fuse_dispatcher.so 00:28:01.035 LIB libspdk_blob.a 00:28:01.035 SO 
libspdk_blob.so.12.0 00:28:01.295 SYMLINK libspdk_blob.so 00:28:01.553 LIB libspdk_bdev.a 00:28:01.553 CC lib/lvol/lvol.o 00:28:01.553 CC lib/blobfs/blobfs.o 00:28:01.553 CC lib/blobfs/tree.o 00:28:01.813 SO libspdk_bdev.so.17.0 00:28:01.813 SYMLINK libspdk_bdev.so 00:28:02.071 CC lib/nvmf/ctrlr.o 00:28:02.071 CC lib/nvmf/ctrlr_discovery.o 00:28:02.071 CC lib/nvmf/subsystem.o 00:28:02.071 CC lib/nvmf/ctrlr_bdev.o 00:28:02.071 CC lib/scsi/dev.o 00:28:02.071 CC lib/nbd/nbd.o 00:28:02.071 CC lib/ublk/ublk.o 00:28:02.071 CC lib/ftl/ftl_core.o 00:28:02.331 CC lib/scsi/lun.o 00:28:02.590 CC lib/ftl/ftl_init.o 00:28:02.590 CC lib/nbd/nbd_rpc.o 00:28:02.590 LIB libspdk_blobfs.a 00:28:02.590 CC lib/scsi/port.o 00:28:02.590 SO libspdk_blobfs.so.11.0 00:28:02.850 CC lib/ftl/ftl_layout.o 00:28:02.850 LIB libspdk_nbd.a 00:28:02.850 SYMLINK libspdk_blobfs.so 00:28:02.850 CC lib/ublk/ublk_rpc.o 00:28:02.850 CC lib/ftl/ftl_debug.o 00:28:02.850 SO libspdk_nbd.so.7.0 00:28:02.850 CC lib/scsi/scsi.o 00:28:02.850 LIB libspdk_lvol.a 00:28:02.850 SO libspdk_lvol.so.11.0 00:28:02.850 CC lib/nvmf/nvmf.o 00:28:02.850 SYMLINK libspdk_nbd.so 00:28:02.850 CC lib/nvmf/nvmf_rpc.o 00:28:02.850 SYMLINK libspdk_lvol.so 00:28:02.850 CC lib/scsi/scsi_bdev.o 00:28:02.850 LIB libspdk_ublk.a 00:28:02.850 CC lib/scsi/scsi_pr.o 00:28:03.110 CC lib/ftl/ftl_io.o 00:28:03.110 SO libspdk_ublk.so.3.0 00:28:03.110 CC lib/ftl/ftl_sb.o 00:28:03.110 SYMLINK libspdk_ublk.so 00:28:03.110 CC lib/ftl/ftl_l2p.o 00:28:03.110 CC lib/ftl/ftl_l2p_flat.o 00:28:03.371 CC lib/scsi/scsi_rpc.o 00:28:03.371 CC lib/ftl/ftl_nv_cache.o 00:28:03.371 CC lib/ftl/ftl_band.o 00:28:03.371 CC lib/scsi/task.o 00:28:03.371 CC lib/ftl/ftl_band_ops.o 00:28:03.371 CC lib/ftl/ftl_writer.o 00:28:03.630 CC lib/ftl/ftl_rq.o 00:28:03.630 LIB libspdk_scsi.a 00:28:03.630 CC lib/nvmf/transport.o 00:28:03.630 SO libspdk_scsi.so.9.0 00:28:03.630 CC lib/ftl/ftl_reloc.o 00:28:03.630 SYMLINK libspdk_scsi.so 00:28:03.630 CC lib/ftl/ftl_l2p_cache.o 
00:28:03.630 CC lib/ftl/ftl_p2l.o 00:28:03.889 CC lib/iscsi/conn.o 00:28:03.889 CC lib/vhost/vhost.o 00:28:03.889 CC lib/nvmf/tcp.o 00:28:03.889 CC lib/ftl/ftl_p2l_log.o 00:28:04.148 CC lib/ftl/mngt/ftl_mngt.o 00:28:04.148 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:28:04.408 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:28:04.408 CC lib/vhost/vhost_rpc.o 00:28:04.408 CC lib/nvmf/stubs.o 00:28:04.408 CC lib/nvmf/mdns_server.o 00:28:04.408 CC lib/nvmf/rdma.o 00:28:04.408 CC lib/nvmf/auth.o 00:28:04.408 CC lib/iscsi/init_grp.o 00:28:04.408 CC lib/ftl/mngt/ftl_mngt_startup.o 00:28:04.668 CC lib/iscsi/iscsi.o 00:28:04.668 CC lib/ftl/mngt/ftl_mngt_md.o 00:28:04.668 CC lib/ftl/mngt/ftl_mngt_misc.o 00:28:04.928 CC lib/vhost/vhost_scsi.o 00:28:04.928 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:28:04.928 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:28:04.928 CC lib/vhost/vhost_blk.o 00:28:05.187 CC lib/iscsi/param.o 00:28:05.187 CC lib/ftl/mngt/ftl_mngt_band.o 00:28:05.187 CC lib/vhost/rte_vhost_user.o 00:28:05.187 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:28:05.446 CC lib/iscsi/portal_grp.o 00:28:05.446 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:28:05.705 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:28:05.705 CC lib/iscsi/tgt_node.o 00:28:05.705 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:28:05.705 CC lib/iscsi/iscsi_subsystem.o 00:28:06.030 CC lib/ftl/utils/ftl_conf.o 00:28:06.030 CC lib/ftl/utils/ftl_md.o 00:28:06.030 CC lib/ftl/utils/ftl_mempool.o 00:28:06.030 CC lib/iscsi/iscsi_rpc.o 00:28:06.030 CC lib/iscsi/task.o 00:28:06.289 CC lib/ftl/utils/ftl_bitmap.o 00:28:06.289 CC lib/ftl/utils/ftl_property.o 00:28:06.289 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:28:06.289 LIB libspdk_vhost.a 00:28:06.289 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:28:06.289 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:28:06.289 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:28:06.289 SO libspdk_vhost.so.8.0 00:28:06.548 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:28:06.548 SYMLINK libspdk_vhost.so 00:28:06.548 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:28:06.548 
CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:28:06.548 CC lib/ftl/upgrade/ftl_sb_v3.o 00:28:06.548 CC lib/ftl/upgrade/ftl_sb_v5.o 00:28:06.548 CC lib/ftl/nvc/ftl_nvc_dev.o 00:28:06.548 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:28:06.548 LIB libspdk_iscsi.a 00:28:06.548 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:28:06.548 SO libspdk_iscsi.so.8.0 00:28:06.807 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:28:06.807 CC lib/ftl/base/ftl_base_dev.o 00:28:06.807 CC lib/ftl/base/ftl_base_bdev.o 00:28:06.807 CC lib/ftl/ftl_trace.o 00:28:06.807 SYMLINK libspdk_iscsi.so 00:28:07.067 LIB libspdk_ftl.a 00:28:07.327 SO libspdk_ftl.so.9.0 00:28:07.327 LIB libspdk_nvmf.a 00:28:07.327 SO libspdk_nvmf.so.20.0 00:28:07.586 SYMLINK libspdk_ftl.so 00:28:07.586 SYMLINK libspdk_nvmf.so 00:28:08.155 CC module/env_dpdk/env_dpdk_rpc.o 00:28:08.155 CC module/scheduler/dynamic/scheduler_dynamic.o 00:28:08.155 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:28:08.155 CC module/scheduler/gscheduler/gscheduler.o 00:28:08.155 CC module/blob/bdev/blob_bdev.o 00:28:08.155 CC module/keyring/file/keyring.o 00:28:08.155 CC module/accel/error/accel_error.o 00:28:08.155 CC module/accel/ioat/accel_ioat.o 00:28:08.155 CC module/sock/posix/posix.o 00:28:08.155 CC module/fsdev/aio/fsdev_aio.o 00:28:08.155 LIB libspdk_env_dpdk_rpc.a 00:28:08.155 SO libspdk_env_dpdk_rpc.so.6.0 00:28:08.155 SYMLINK libspdk_env_dpdk_rpc.so 00:28:08.155 CC module/fsdev/aio/fsdev_aio_rpc.o 00:28:08.155 CC module/keyring/file/keyring_rpc.o 00:28:08.155 LIB libspdk_scheduler_gscheduler.a 00:28:08.155 LIB libspdk_scheduler_dpdk_governor.a 00:28:08.155 SO libspdk_scheduler_gscheduler.so.4.0 00:28:08.155 SO libspdk_scheduler_dpdk_governor.so.4.0 00:28:08.155 LIB libspdk_scheduler_dynamic.a 00:28:08.155 CC module/accel/ioat/accel_ioat_rpc.o 00:28:08.418 SO libspdk_scheduler_dynamic.so.4.0 00:28:08.418 CC module/accel/error/accel_error_rpc.o 00:28:08.418 SYMLINK libspdk_scheduler_gscheduler.so 00:28:08.418 CC module/fsdev/aio/linux_aio_mgr.o 
00:28:08.418 SYMLINK libspdk_scheduler_dpdk_governor.so 00:28:08.418 SYMLINK libspdk_scheduler_dynamic.so 00:28:08.418 LIB libspdk_keyring_file.a 00:28:08.418 LIB libspdk_blob_bdev.a 00:28:08.418 SO libspdk_keyring_file.so.2.0 00:28:08.418 SO libspdk_blob_bdev.so.12.0 00:28:08.418 LIB libspdk_accel_ioat.a 00:28:08.418 SO libspdk_accel_ioat.so.6.0 00:28:08.418 LIB libspdk_accel_error.a 00:28:08.418 SYMLINK libspdk_keyring_file.so 00:28:08.418 SYMLINK libspdk_blob_bdev.so 00:28:08.418 SO libspdk_accel_error.so.2.0 00:28:08.418 CC module/keyring/linux/keyring.o 00:28:08.418 CC module/accel/dsa/accel_dsa.o 00:28:08.418 CC module/accel/iaa/accel_iaa.o 00:28:08.418 SYMLINK libspdk_accel_ioat.so 00:28:08.418 CC module/keyring/linux/keyring_rpc.o 00:28:08.418 CC module/accel/dsa/accel_dsa_rpc.o 00:28:08.681 SYMLINK libspdk_accel_error.so 00:28:08.681 LIB libspdk_keyring_linux.a 00:28:08.681 CC module/accel/iaa/accel_iaa_rpc.o 00:28:08.681 SO libspdk_keyring_linux.so.1.0 00:28:08.681 CC module/bdev/delay/vbdev_delay.o 00:28:08.681 CC module/blobfs/bdev/blobfs_bdev.o 00:28:08.681 CC module/bdev/error/vbdev_error.o 00:28:08.681 SYMLINK libspdk_keyring_linux.so 00:28:08.681 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:28:08.681 LIB libspdk_accel_iaa.a 00:28:08.681 CC module/bdev/gpt/gpt.o 00:28:08.941 SO libspdk_accel_iaa.so.3.0 00:28:08.941 LIB libspdk_accel_dsa.a 00:28:08.941 SO libspdk_accel_dsa.so.5.0 00:28:08.941 CC module/bdev/lvol/vbdev_lvol.o 00:28:08.941 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:28:08.941 SYMLINK libspdk_accel_iaa.so 00:28:08.941 LIB libspdk_fsdev_aio.a 00:28:08.941 LIB libspdk_sock_posix.a 00:28:08.941 LIB libspdk_blobfs_bdev.a 00:28:08.941 SYMLINK libspdk_accel_dsa.so 00:28:08.941 SO libspdk_fsdev_aio.so.1.0 00:28:08.941 SO libspdk_blobfs_bdev.so.6.0 00:28:08.941 SO libspdk_sock_posix.so.6.0 00:28:08.941 SYMLINK libspdk_blobfs_bdev.so 00:28:08.941 SYMLINK libspdk_fsdev_aio.so 00:28:08.941 CC module/bdev/error/vbdev_error_rpc.o 00:28:08.942 CC 
module/bdev/gpt/vbdev_gpt.o 00:28:08.942 SYMLINK libspdk_sock_posix.so 00:28:08.942 CC module/bdev/delay/vbdev_delay_rpc.o 00:28:09.202 CC module/bdev/malloc/bdev_malloc.o 00:28:09.202 CC module/bdev/null/bdev_null.o 00:28:09.202 CC module/bdev/null/bdev_null_rpc.o 00:28:09.202 CC module/bdev/nvme/bdev_nvme.o 00:28:09.202 LIB libspdk_bdev_error.a 00:28:09.202 CC module/bdev/passthru/vbdev_passthru.o 00:28:09.202 SO libspdk_bdev_error.so.6.0 00:28:09.202 LIB libspdk_bdev_delay.a 00:28:09.202 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:28:09.463 SO libspdk_bdev_delay.so.6.0 00:28:09.463 SYMLINK libspdk_bdev_error.so 00:28:09.464 CC module/bdev/malloc/bdev_malloc_rpc.o 00:28:09.464 LIB libspdk_bdev_gpt.a 00:28:09.464 LIB libspdk_bdev_null.a 00:28:09.464 SYMLINK libspdk_bdev_delay.so 00:28:09.464 SO libspdk_bdev_gpt.so.6.0 00:28:09.464 SO libspdk_bdev_null.so.6.0 00:28:09.464 CC module/bdev/nvme/bdev_nvme_rpc.o 00:28:09.464 SYMLINK libspdk_bdev_gpt.so 00:28:09.464 LIB libspdk_bdev_lvol.a 00:28:09.464 SO libspdk_bdev_lvol.so.6.0 00:28:09.464 SYMLINK libspdk_bdev_null.so 00:28:09.464 CC module/bdev/raid/bdev_raid.o 00:28:09.464 LIB libspdk_bdev_malloc.a 00:28:09.464 LIB libspdk_bdev_passthru.a 00:28:09.726 SO libspdk_bdev_malloc.so.6.0 00:28:09.726 CC module/bdev/split/vbdev_split.o 00:28:09.726 SO libspdk_bdev_passthru.so.6.0 00:28:09.726 SYMLINK libspdk_bdev_lvol.so 00:28:09.726 CC module/bdev/zone_block/vbdev_zone_block.o 00:28:09.726 SYMLINK libspdk_bdev_malloc.so 00:28:09.726 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:28:09.726 CC module/bdev/aio/bdev_aio.o 00:28:09.726 SYMLINK libspdk_bdev_passthru.so 00:28:09.726 CC module/bdev/ftl/bdev_ftl.o 00:28:09.726 CC module/bdev/iscsi/bdev_iscsi.o 00:28:09.726 CC module/bdev/virtio/bdev_virtio_scsi.o 00:28:09.726 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:28:09.985 CC module/bdev/split/vbdev_split_rpc.o 00:28:09.985 CC module/bdev/aio/bdev_aio_rpc.o 00:28:09.985 CC module/bdev/ftl/bdev_ftl_rpc.o 00:28:09.985 LIB 
libspdk_bdev_split.a 00:28:09.985 LIB libspdk_bdev_zone_block.a 00:28:09.985 SO libspdk_bdev_split.so.6.0 00:28:09.985 CC module/bdev/raid/bdev_raid_rpc.o 00:28:09.985 SO libspdk_bdev_zone_block.so.6.0 00:28:10.243 SYMLINK libspdk_bdev_split.so 00:28:10.243 CC module/bdev/virtio/bdev_virtio_blk.o 00:28:10.243 SYMLINK libspdk_bdev_zone_block.so 00:28:10.243 CC module/bdev/virtio/bdev_virtio_rpc.o 00:28:10.243 LIB libspdk_bdev_aio.a 00:28:10.243 LIB libspdk_bdev_iscsi.a 00:28:10.243 SO libspdk_bdev_aio.so.6.0 00:28:10.243 SO libspdk_bdev_iscsi.so.6.0 00:28:10.243 LIB libspdk_bdev_ftl.a 00:28:10.243 CC module/bdev/nvme/nvme_rpc.o 00:28:10.243 SYMLINK libspdk_bdev_iscsi.so 00:28:10.243 SYMLINK libspdk_bdev_aio.so 00:28:10.243 CC module/bdev/raid/bdev_raid_sb.o 00:28:10.243 CC module/bdev/nvme/bdev_mdns_client.o 00:28:10.243 SO libspdk_bdev_ftl.so.6.0 00:28:10.243 CC module/bdev/raid/raid0.o 00:28:10.243 SYMLINK libspdk_bdev_ftl.so 00:28:10.501 CC module/bdev/raid/raid1.o 00:28:10.501 CC module/bdev/raid/concat.o 00:28:10.501 CC module/bdev/nvme/vbdev_opal.o 00:28:10.501 LIB libspdk_bdev_virtio.a 00:28:10.501 CC module/bdev/nvme/vbdev_opal_rpc.o 00:28:10.501 SO libspdk_bdev_virtio.so.6.0 00:28:10.501 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:28:10.501 CC module/bdev/raid/raid5f.o 00:28:10.501 SYMLINK libspdk_bdev_virtio.so 00:28:11.069 LIB libspdk_bdev_raid.a 00:28:11.328 SO libspdk_bdev_raid.so.6.0 00:28:11.328 SYMLINK libspdk_bdev_raid.so 00:28:12.284 LIB libspdk_bdev_nvme.a 00:28:12.284 SO libspdk_bdev_nvme.so.7.1 00:28:12.553 SYMLINK libspdk_bdev_nvme.so 00:28:13.119 CC module/event/subsystems/iobuf/iobuf.o 00:28:13.119 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:28:13.119 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:28:13.119 CC module/event/subsystems/keyring/keyring.o 00:28:13.119 CC module/event/subsystems/sock/sock.o 00:28:13.119 CC module/event/subsystems/fsdev/fsdev.o 00:28:13.119 CC module/event/subsystems/vmd/vmd.o 00:28:13.119 CC 
module/event/subsystems/vmd/vmd_rpc.o 00:28:13.119 CC module/event/subsystems/scheduler/scheduler.o 00:28:13.119 LIB libspdk_event_fsdev.a 00:28:13.119 LIB libspdk_event_sock.a 00:28:13.119 LIB libspdk_event_vmd.a 00:28:13.119 LIB libspdk_event_keyring.a 00:28:13.119 LIB libspdk_event_scheduler.a 00:28:13.119 LIB libspdk_event_vhost_blk.a 00:28:13.119 SO libspdk_event_sock.so.5.0 00:28:13.119 SO libspdk_event_fsdev.so.1.0 00:28:13.119 SO libspdk_event_keyring.so.1.0 00:28:13.119 SO libspdk_event_vmd.so.6.0 00:28:13.119 LIB libspdk_event_iobuf.a 00:28:13.119 SO libspdk_event_scheduler.so.4.0 00:28:13.119 SO libspdk_event_vhost_blk.so.3.0 00:28:13.119 SO libspdk_event_iobuf.so.3.0 00:28:13.377 SYMLINK libspdk_event_sock.so 00:28:13.377 SYMLINK libspdk_event_keyring.so 00:28:13.377 SYMLINK libspdk_event_fsdev.so 00:28:13.377 SYMLINK libspdk_event_scheduler.so 00:28:13.377 SYMLINK libspdk_event_vhost_blk.so 00:28:13.377 SYMLINK libspdk_event_vmd.so 00:28:13.377 SYMLINK libspdk_event_iobuf.so 00:28:13.634 CC module/event/subsystems/accel/accel.o 00:28:13.892 LIB libspdk_event_accel.a 00:28:13.892 SO libspdk_event_accel.so.6.0 00:28:13.892 SYMLINK libspdk_event_accel.so 00:28:14.458 CC module/event/subsystems/bdev/bdev.o 00:28:14.459 LIB libspdk_event_bdev.a 00:28:14.459 SO libspdk_event_bdev.so.6.0 00:28:14.717 SYMLINK libspdk_event_bdev.so 00:28:14.974 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:28:14.974 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:28:14.974 CC module/event/subsystems/nbd/nbd.o 00:28:14.974 CC module/event/subsystems/ublk/ublk.o 00:28:14.974 CC module/event/subsystems/scsi/scsi.o 00:28:15.233 LIB libspdk_event_ublk.a 00:28:15.233 LIB libspdk_event_nbd.a 00:28:15.233 LIB libspdk_event_scsi.a 00:28:15.233 SO libspdk_event_ublk.so.3.0 00:28:15.233 SO libspdk_event_scsi.so.6.0 00:28:15.233 SO libspdk_event_nbd.so.6.0 00:28:15.233 LIB libspdk_event_nvmf.a 00:28:15.233 SYMLINK libspdk_event_ublk.so 00:28:15.234 SO libspdk_event_nvmf.so.6.0 00:28:15.234 
SYMLINK libspdk_event_nbd.so 00:28:15.234 SYMLINK libspdk_event_scsi.so 00:28:15.234 SYMLINK libspdk_event_nvmf.so 00:28:15.508 CC module/event/subsystems/iscsi/iscsi.o 00:28:15.508 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:28:15.783 LIB libspdk_event_vhost_scsi.a 00:28:15.783 LIB libspdk_event_iscsi.a 00:28:15.783 SO libspdk_event_vhost_scsi.so.3.0 00:28:15.783 SO libspdk_event_iscsi.so.6.0 00:28:15.783 SYMLINK libspdk_event_vhost_scsi.so 00:28:15.783 SYMLINK libspdk_event_iscsi.so 00:28:16.043 SO libspdk.so.6.0 00:28:16.043 SYMLINK libspdk.so 00:28:16.303 CXX app/trace/trace.o 00:28:16.303 TEST_HEADER include/spdk/accel.h 00:28:16.303 CC app/trace_record/trace_record.o 00:28:16.303 TEST_HEADER include/spdk/accel_module.h 00:28:16.303 TEST_HEADER include/spdk/assert.h 00:28:16.303 TEST_HEADER include/spdk/barrier.h 00:28:16.303 TEST_HEADER include/spdk/base64.h 00:28:16.562 TEST_HEADER include/spdk/bdev.h 00:28:16.562 TEST_HEADER include/spdk/bdev_module.h 00:28:16.562 TEST_HEADER include/spdk/bdev_zone.h 00:28:16.562 TEST_HEADER include/spdk/bit_array.h 00:28:16.562 TEST_HEADER include/spdk/bit_pool.h 00:28:16.562 TEST_HEADER include/spdk/blob_bdev.h 00:28:16.562 TEST_HEADER include/spdk/blobfs_bdev.h 00:28:16.562 TEST_HEADER include/spdk/blobfs.h 00:28:16.562 TEST_HEADER include/spdk/blob.h 00:28:16.562 TEST_HEADER include/spdk/conf.h 00:28:16.562 CC examples/interrupt_tgt/interrupt_tgt.o 00:28:16.562 TEST_HEADER include/spdk/config.h 00:28:16.562 TEST_HEADER include/spdk/cpuset.h 00:28:16.562 TEST_HEADER include/spdk/crc16.h 00:28:16.562 TEST_HEADER include/spdk/crc32.h 00:28:16.562 TEST_HEADER include/spdk/crc64.h 00:28:16.562 TEST_HEADER include/spdk/dif.h 00:28:16.562 TEST_HEADER include/spdk/dma.h 00:28:16.562 TEST_HEADER include/spdk/endian.h 00:28:16.562 TEST_HEADER include/spdk/env_dpdk.h 00:28:16.562 TEST_HEADER include/spdk/env.h 00:28:16.562 TEST_HEADER include/spdk/event.h 00:28:16.562 TEST_HEADER include/spdk/fd_group.h 00:28:16.562 
TEST_HEADER include/spdk/fd.h 00:28:16.562 CC test/thread/poller_perf/poller_perf.o 00:28:16.562 TEST_HEADER include/spdk/file.h 00:28:16.562 TEST_HEADER include/spdk/fsdev.h 00:28:16.562 TEST_HEADER include/spdk/fsdev_module.h 00:28:16.562 CC examples/util/zipf/zipf.o 00:28:16.562 TEST_HEADER include/spdk/ftl.h 00:28:16.562 CC examples/ioat/perf/perf.o 00:28:16.562 TEST_HEADER include/spdk/fuse_dispatcher.h 00:28:16.562 TEST_HEADER include/spdk/gpt_spec.h 00:28:16.562 TEST_HEADER include/spdk/hexlify.h 00:28:16.562 TEST_HEADER include/spdk/histogram_data.h 00:28:16.562 TEST_HEADER include/spdk/idxd.h 00:28:16.562 TEST_HEADER include/spdk/idxd_spec.h 00:28:16.562 TEST_HEADER include/spdk/init.h 00:28:16.562 TEST_HEADER include/spdk/ioat.h 00:28:16.562 TEST_HEADER include/spdk/ioat_spec.h 00:28:16.562 TEST_HEADER include/spdk/iscsi_spec.h 00:28:16.562 TEST_HEADER include/spdk/json.h 00:28:16.562 TEST_HEADER include/spdk/jsonrpc.h 00:28:16.562 TEST_HEADER include/spdk/keyring.h 00:28:16.562 TEST_HEADER include/spdk/keyring_module.h 00:28:16.562 CC test/dma/test_dma/test_dma.o 00:28:16.562 TEST_HEADER include/spdk/likely.h 00:28:16.562 TEST_HEADER include/spdk/log.h 00:28:16.562 TEST_HEADER include/spdk/lvol.h 00:28:16.562 TEST_HEADER include/spdk/md5.h 00:28:16.562 TEST_HEADER include/spdk/memory.h 00:28:16.562 TEST_HEADER include/spdk/mmio.h 00:28:16.562 TEST_HEADER include/spdk/nbd.h 00:28:16.562 TEST_HEADER include/spdk/net.h 00:28:16.562 TEST_HEADER include/spdk/notify.h 00:28:16.562 TEST_HEADER include/spdk/nvme.h 00:28:16.562 TEST_HEADER include/spdk/nvme_intel.h 00:28:16.562 TEST_HEADER include/spdk/nvme_ocssd.h 00:28:16.562 CC test/app/bdev_svc/bdev_svc.o 00:28:16.562 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:28:16.562 TEST_HEADER include/spdk/nvme_spec.h 00:28:16.562 TEST_HEADER include/spdk/nvme_zns.h 00:28:16.562 TEST_HEADER include/spdk/nvmf_cmd.h 00:28:16.562 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:28:16.562 TEST_HEADER include/spdk/nvmf.h 
00:28:16.562 TEST_HEADER include/spdk/nvmf_spec.h 00:28:16.562 TEST_HEADER include/spdk/nvmf_transport.h 00:28:16.562 TEST_HEADER include/spdk/opal.h 00:28:16.562 TEST_HEADER include/spdk/opal_spec.h 00:28:16.562 TEST_HEADER include/spdk/pci_ids.h 00:28:16.562 TEST_HEADER include/spdk/pipe.h 00:28:16.562 TEST_HEADER include/spdk/queue.h 00:28:16.562 CC test/env/mem_callbacks/mem_callbacks.o 00:28:16.562 TEST_HEADER include/spdk/reduce.h 00:28:16.562 TEST_HEADER include/spdk/rpc.h 00:28:16.562 TEST_HEADER include/spdk/scheduler.h 00:28:16.562 TEST_HEADER include/spdk/scsi.h 00:28:16.562 TEST_HEADER include/spdk/scsi_spec.h 00:28:16.562 TEST_HEADER include/spdk/sock.h 00:28:16.562 TEST_HEADER include/spdk/stdinc.h 00:28:16.562 TEST_HEADER include/spdk/string.h 00:28:16.562 TEST_HEADER include/spdk/thread.h 00:28:16.562 TEST_HEADER include/spdk/trace.h 00:28:16.562 TEST_HEADER include/spdk/trace_parser.h 00:28:16.562 LINK poller_perf 00:28:16.562 TEST_HEADER include/spdk/tree.h 00:28:16.562 TEST_HEADER include/spdk/ublk.h 00:28:16.562 TEST_HEADER include/spdk/util.h 00:28:16.562 TEST_HEADER include/spdk/uuid.h 00:28:16.562 TEST_HEADER include/spdk/version.h 00:28:16.562 TEST_HEADER include/spdk/vfio_user_pci.h 00:28:16.562 LINK zipf 00:28:16.562 TEST_HEADER include/spdk/vfio_user_spec.h 00:28:16.562 TEST_HEADER include/spdk/vhost.h 00:28:16.562 LINK interrupt_tgt 00:28:16.562 TEST_HEADER include/spdk/vmd.h 00:28:16.562 TEST_HEADER include/spdk/xor.h 00:28:16.562 TEST_HEADER include/spdk/zipf.h 00:28:16.562 CXX test/cpp_headers/accel.o 00:28:16.821 LINK spdk_trace_record 00:28:16.821 LINK ioat_perf 00:28:16.821 LINK bdev_svc 00:28:16.821 LINK spdk_trace 00:28:16.821 CXX test/cpp_headers/accel_module.o 00:28:16.821 CC examples/ioat/verify/verify.o 00:28:17.079 CC app/iscsi_tgt/iscsi_tgt.o 00:28:17.079 CC app/nvmf_tgt/nvmf_main.o 00:28:17.079 CC test/app/histogram_perf/histogram_perf.o 00:28:17.079 CXX test/cpp_headers/assert.o 00:28:17.079 CC test/app/jsoncat/jsoncat.o 
00:28:17.079 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:28:17.079 CC test/app/stub/stub.o 00:28:17.079 LINK test_dma 00:28:17.079 LINK histogram_perf 00:28:17.079 LINK verify 00:28:17.079 LINK mem_callbacks 00:28:17.338 LINK iscsi_tgt 00:28:17.338 LINK nvmf_tgt 00:28:17.338 LINK jsoncat 00:28:17.338 CXX test/cpp_headers/barrier.o 00:28:17.338 LINK stub 00:28:17.338 CXX test/cpp_headers/base64.o 00:28:17.338 CC test/env/vtophys/vtophys.o 00:28:17.338 CC test/rpc_client/rpc_client_test.o 00:28:17.596 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:28:17.596 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:28:17.596 CXX test/cpp_headers/bdev.o 00:28:17.596 CC examples/thread/thread/thread_ex.o 00:28:17.596 LINK nvme_fuzz 00:28:17.596 LINK vtophys 00:28:17.596 CC app/spdk_tgt/spdk_tgt.o 00:28:17.596 CC examples/sock/hello_world/hello_sock.o 00:28:17.596 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:28:17.596 CC examples/vmd/lsvmd/lsvmd.o 00:28:17.596 LINK rpc_client_test 00:28:17.596 CXX test/cpp_headers/bdev_module.o 00:28:17.854 LINK lsvmd 00:28:17.854 CC examples/vmd/led/led.o 00:28:17.854 LINK spdk_tgt 00:28:17.854 LINK thread 00:28:17.854 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:28:17.854 LINK hello_sock 00:28:17.854 CXX test/cpp_headers/bdev_zone.o 00:28:17.854 CC test/env/memory/memory_ut.o 00:28:17.854 CXX test/cpp_headers/bit_array.o 00:28:17.854 LINK led 00:28:18.112 LINK env_dpdk_post_init 00:28:18.112 CC app/spdk_lspci/spdk_lspci.o 00:28:18.112 CXX test/cpp_headers/bit_pool.o 00:28:18.112 CXX test/cpp_headers/blob_bdev.o 00:28:18.112 LINK vhost_fuzz 00:28:18.112 CC examples/idxd/perf/perf.o 00:28:18.372 CC examples/accel/perf/accel_perf.o 00:28:18.372 CC examples/fsdev/hello_world/hello_fsdev.o 00:28:18.372 LINK spdk_lspci 00:28:18.372 CXX test/cpp_headers/blobfs_bdev.o 00:28:18.372 CC examples/blob/hello_world/hello_blob.o 00:28:18.372 CC examples/blob/cli/blobcli.o 00:28:18.633 CXX test/cpp_headers/blobfs.o 00:28:18.633 CC test/accel/dif/dif.o 
00:28:18.633 CC app/spdk_nvme_perf/perf.o 00:28:18.633 LINK hello_fsdev 00:28:18.633 LINK idxd_perf 00:28:18.633 LINK hello_blob 00:28:18.633 CXX test/cpp_headers/blob.o 00:28:18.892 CXX test/cpp_headers/conf.o 00:28:18.892 LINK accel_perf 00:28:18.892 CC test/env/pci/pci_ut.o 00:28:18.892 CC examples/nvme/reconnect/reconnect.o 00:28:18.892 CC examples/nvme/hello_world/hello_world.o 00:28:19.151 LINK blobcli 00:28:19.151 CXX test/cpp_headers/config.o 00:28:19.151 CXX test/cpp_headers/cpuset.o 00:28:19.151 LINK hello_world 00:28:19.151 CC examples/nvme/nvme_manage/nvme_manage.o 00:28:19.151 LINK memory_ut 00:28:19.151 CXX test/cpp_headers/crc16.o 00:28:19.410 LINK reconnect 00:28:19.410 LINK pci_ut 00:28:19.410 CC examples/nvme/arbitration/arbitration.o 00:28:19.410 LINK iscsi_fuzz 00:28:19.410 CXX test/cpp_headers/crc32.o 00:28:19.410 CC examples/nvme/hotplug/hotplug.o 00:28:19.410 LINK dif 00:28:19.410 CC examples/nvme/cmb_copy/cmb_copy.o 00:28:19.669 LINK spdk_nvme_perf 00:28:19.669 CC examples/nvme/abort/abort.o 00:28:19.669 CXX test/cpp_headers/crc64.o 00:28:19.926 LINK hotplug 00:28:19.926 LINK cmb_copy 00:28:19.927 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:28:19.927 LINK arbitration 00:28:19.927 CC examples/bdev/hello_world/hello_bdev.o 00:28:19.927 CXX test/cpp_headers/dif.o 00:28:19.927 LINK nvme_manage 00:28:19.927 CXX test/cpp_headers/dma.o 00:28:19.927 CC test/blobfs/mkfs/mkfs.o 00:28:19.927 CC app/spdk_nvme_identify/identify.o 00:28:20.184 LINK abort 00:28:20.184 LINK pmr_persistence 00:28:20.184 CXX test/cpp_headers/endian.o 00:28:20.184 CC examples/bdev/bdevperf/bdevperf.o 00:28:20.184 LINK mkfs 00:28:20.184 LINK hello_bdev 00:28:20.185 CC app/spdk_nvme_discover/discovery_aer.o 00:28:20.185 CC app/spdk_top/spdk_top.o 00:28:20.185 CXX test/cpp_headers/env_dpdk.o 00:28:20.185 CC app/vhost/vhost.o 00:28:20.443 CC app/spdk_dd/spdk_dd.o 00:28:20.443 LINK spdk_nvme_discover 00:28:20.443 CC app/fio/nvme/fio_plugin.o 00:28:20.443 CXX 
test/cpp_headers/env.o 00:28:20.443 LINK vhost 00:28:20.702 CC test/event/event_perf/event_perf.o 00:28:20.702 CXX test/cpp_headers/event.o 00:28:20.702 CC test/lvol/esnap/esnap.o 00:28:20.702 CXX test/cpp_headers/fd_group.o 00:28:20.702 LINK event_perf 00:28:20.702 CXX test/cpp_headers/fd.o 00:28:20.702 CC app/fio/bdev/fio_plugin.o 00:28:20.960 LINK spdk_dd 00:28:20.960 CXX test/cpp_headers/file.o 00:28:20.960 CC test/event/reactor/reactor.o 00:28:20.960 CC test/event/reactor_perf/reactor_perf.o 00:28:20.960 CXX test/cpp_headers/fsdev.o 00:28:20.960 LINK spdk_nvme_identify 00:28:21.218 LINK spdk_nvme 00:28:21.218 LINK reactor 00:28:21.218 LINK reactor_perf 00:28:21.218 LINK bdevperf 00:28:21.218 CXX test/cpp_headers/fsdev_module.o 00:28:21.218 LINK spdk_top 00:28:21.218 CXX test/cpp_headers/ftl.o 00:28:21.218 CC test/nvme/aer/aer.o 00:28:21.218 CC test/event/app_repeat/app_repeat.o 00:28:21.477 LINK spdk_bdev 00:28:21.477 CXX test/cpp_headers/fuse_dispatcher.o 00:28:21.477 CC test/event/scheduler/scheduler.o 00:28:21.477 CXX test/cpp_headers/gpt_spec.o 00:28:21.477 LINK app_repeat 00:28:21.477 CXX test/cpp_headers/hexlify.o 00:28:21.477 CXX test/cpp_headers/histogram_data.o 00:28:21.477 CC test/bdev/bdevio/bdevio.o 00:28:21.477 CC examples/nvmf/nvmf/nvmf.o 00:28:21.735 LINK aer 00:28:21.735 CXX test/cpp_headers/idxd.o 00:28:21.735 CC test/nvme/reset/reset.o 00:28:21.735 LINK scheduler 00:28:21.735 CC test/nvme/sgl/sgl.o 00:28:21.735 CC test/nvme/e2edp/nvme_dp.o 00:28:21.735 CC test/nvme/overhead/overhead.o 00:28:21.735 CXX test/cpp_headers/idxd_spec.o 00:28:21.735 CXX test/cpp_headers/init.o 00:28:21.994 CC test/nvme/err_injection/err_injection.o 00:28:21.994 LINK nvmf 00:28:21.994 LINK bdevio 00:28:21.994 LINK reset 00:28:21.994 LINK sgl 00:28:21.994 CXX test/cpp_headers/ioat.o 00:28:21.994 LINK nvme_dp 00:28:21.994 LINK err_injection 00:28:22.252 LINK overhead 00:28:22.252 CXX test/cpp_headers/ioat_spec.o 00:28:22.252 CC test/nvme/startup/startup.o 00:28:22.252 
CC test/nvme/reserve/reserve.o 00:28:22.252 CC test/nvme/simple_copy/simple_copy.o 00:28:22.252 CC test/nvme/connect_stress/connect_stress.o 00:28:22.252 CC test/nvme/boot_partition/boot_partition.o 00:28:22.252 CXX test/cpp_headers/iscsi_spec.o 00:28:22.252 CC test/nvme/compliance/nvme_compliance.o 00:28:22.252 LINK startup 00:28:22.510 CC test/nvme/fused_ordering/fused_ordering.o 00:28:22.510 CC test/nvme/doorbell_aers/doorbell_aers.o 00:28:22.510 LINK reserve 00:28:22.510 LINK boot_partition 00:28:22.510 LINK connect_stress 00:28:22.510 CXX test/cpp_headers/json.o 00:28:22.510 LINK simple_copy 00:28:22.510 LINK doorbell_aers 00:28:22.510 LINK fused_ordering 00:28:22.510 CC test/nvme/fdp/fdp.o 00:28:22.769 CXX test/cpp_headers/jsonrpc.o 00:28:22.769 CXX test/cpp_headers/keyring.o 00:28:22.769 CXX test/cpp_headers/keyring_module.o 00:28:22.769 CXX test/cpp_headers/likely.o 00:28:22.769 CC test/nvme/cuse/cuse.o 00:28:22.769 LINK nvme_compliance 00:28:22.769 CXX test/cpp_headers/log.o 00:28:22.769 CXX test/cpp_headers/lvol.o 00:28:22.769 CXX test/cpp_headers/md5.o 00:28:22.769 CXX test/cpp_headers/memory.o 00:28:22.769 CXX test/cpp_headers/mmio.o 00:28:22.769 CXX test/cpp_headers/nbd.o 00:28:22.769 CXX test/cpp_headers/net.o 00:28:23.027 CXX test/cpp_headers/notify.o 00:28:23.027 CXX test/cpp_headers/nvme.o 00:28:23.027 CXX test/cpp_headers/nvme_intel.o 00:28:23.027 CXX test/cpp_headers/nvme_ocssd.o 00:28:23.027 CXX test/cpp_headers/nvme_ocssd_spec.o 00:28:23.027 LINK fdp 00:28:23.027 CXX test/cpp_headers/nvme_spec.o 00:28:23.027 CXX test/cpp_headers/nvme_zns.o 00:28:23.027 CXX test/cpp_headers/nvmf_cmd.o 00:28:23.027 CXX test/cpp_headers/nvmf_fc_spec.o 00:28:23.027 CXX test/cpp_headers/nvmf.o 00:28:23.286 CXX test/cpp_headers/nvmf_spec.o 00:28:23.286 CXX test/cpp_headers/nvmf_transport.o 00:28:23.286 CXX test/cpp_headers/opal.o 00:28:23.286 CXX test/cpp_headers/opal_spec.o 00:28:23.286 CXX test/cpp_headers/pci_ids.o 00:28:23.286 CXX test/cpp_headers/pipe.o 
00:28:23.286 CXX test/cpp_headers/queue.o 00:28:23.286 CXX test/cpp_headers/reduce.o 00:28:23.286 CXX test/cpp_headers/rpc.o 00:28:23.286 CXX test/cpp_headers/scheduler.o 00:28:23.286 CXX test/cpp_headers/scsi.o 00:28:23.286 CXX test/cpp_headers/scsi_spec.o 00:28:23.286 CXX test/cpp_headers/sock.o 00:28:23.546 CXX test/cpp_headers/stdinc.o 00:28:23.546 CXX test/cpp_headers/string.o 00:28:23.546 CXX test/cpp_headers/thread.o 00:28:23.546 CXX test/cpp_headers/trace.o 00:28:23.546 CXX test/cpp_headers/trace_parser.o 00:28:23.546 CXX test/cpp_headers/tree.o 00:28:23.546 CXX test/cpp_headers/ublk.o 00:28:23.546 CXX test/cpp_headers/util.o 00:28:23.546 CXX test/cpp_headers/uuid.o 00:28:23.546 CXX test/cpp_headers/version.o 00:28:23.546 CXX test/cpp_headers/vfio_user_pci.o 00:28:23.546 CXX test/cpp_headers/vfio_user_spec.o 00:28:23.804 CXX test/cpp_headers/vhost.o 00:28:23.804 CXX test/cpp_headers/vmd.o 00:28:23.804 CXX test/cpp_headers/xor.o 00:28:23.804 CXX test/cpp_headers/zipf.o 00:28:24.370 LINK cuse 00:28:27.655 LINK esnap 00:28:27.655 00:28:27.655 real 1m31.147s 00:28:27.655 user 8m7.357s 00:28:27.655 sys 1m39.494s 00:28:27.655 17:25:28 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:28:27.655 17:25:28 make -- common/autotest_common.sh@10 -- $ set +x 00:28:27.655 ************************************ 00:28:27.655 END TEST make 00:28:27.655 ************************************ 00:28:27.655 17:25:28 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:28:27.655 17:25:28 -- pm/common@29 -- $ signal_monitor_resources TERM 00:28:27.655 17:25:28 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:28:27.655 17:25:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:27.655 17:25:28 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:28:27.655 17:25:28 -- pm/common@44 -- $ pid=5476 00:28:27.655 17:25:28 -- pm/common@50 -- $ kill -TERM 5476 00:28:27.655 17:25:28 -- pm/common@42 -- $ for 
monitor in "${MONITOR_RESOURCES[@]}" 00:28:27.655 17:25:28 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:28:27.655 17:25:28 -- pm/common@44 -- $ pid=5478 00:28:27.655 17:25:28 -- pm/common@50 -- $ kill -TERM 5478 00:28:27.655 17:25:28 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:28:27.655 17:25:28 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:28:27.655 17:25:28 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:27.655 17:25:28 -- common/autotest_common.sh@1693 -- # lcov --version 00:28:27.655 17:25:28 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:27.914 17:25:28 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:27.914 17:25:28 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:27.914 17:25:28 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:27.914 17:25:28 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:27.914 17:25:28 -- scripts/common.sh@336 -- # IFS=.-: 00:28:27.914 17:25:28 -- scripts/common.sh@336 -- # read -ra ver1 00:28:27.914 17:25:28 -- scripts/common.sh@337 -- # IFS=.-: 00:28:27.914 17:25:28 -- scripts/common.sh@337 -- # read -ra ver2 00:28:27.914 17:25:28 -- scripts/common.sh@338 -- # local 'op=<' 00:28:27.914 17:25:28 -- scripts/common.sh@340 -- # ver1_l=2 00:28:27.914 17:25:28 -- scripts/common.sh@341 -- # ver2_l=1 00:28:27.914 17:25:28 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:27.914 17:25:28 -- scripts/common.sh@344 -- # case "$op" in 00:28:27.914 17:25:28 -- scripts/common.sh@345 -- # : 1 00:28:27.914 17:25:28 -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:27.914 17:25:28 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:27.914 17:25:28 -- scripts/common.sh@365 -- # decimal 1 00:28:27.914 17:25:28 -- scripts/common.sh@353 -- # local d=1 00:28:27.914 17:25:28 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:27.914 17:25:28 -- scripts/common.sh@355 -- # echo 1 00:28:27.914 17:25:28 -- scripts/common.sh@365 -- # ver1[v]=1 00:28:27.914 17:25:28 -- scripts/common.sh@366 -- # decimal 2 00:28:27.914 17:25:28 -- scripts/common.sh@353 -- # local d=2 00:28:27.914 17:25:28 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:27.914 17:25:28 -- scripts/common.sh@355 -- # echo 2 00:28:27.914 17:25:28 -- scripts/common.sh@366 -- # ver2[v]=2 00:28:27.914 17:25:28 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:27.914 17:25:28 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:27.914 17:25:28 -- scripts/common.sh@368 -- # return 0 00:28:27.914 17:25:28 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:27.914 17:25:28 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:27.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.914 --rc genhtml_branch_coverage=1 00:28:27.914 --rc genhtml_function_coverage=1 00:28:27.914 --rc genhtml_legend=1 00:28:27.914 --rc geninfo_all_blocks=1 00:28:27.914 --rc geninfo_unexecuted_blocks=1 00:28:27.914 00:28:27.914 ' 00:28:27.914 17:25:28 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:27.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.914 --rc genhtml_branch_coverage=1 00:28:27.914 --rc genhtml_function_coverage=1 00:28:27.914 --rc genhtml_legend=1 00:28:27.914 --rc geninfo_all_blocks=1 00:28:27.914 --rc geninfo_unexecuted_blocks=1 00:28:27.914 00:28:27.914 ' 00:28:27.914 17:25:28 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:27.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.914 --rc genhtml_branch_coverage=1 00:28:27.914 --rc 
genhtml_function_coverage=1 00:28:27.914 --rc genhtml_legend=1 00:28:27.914 --rc geninfo_all_blocks=1 00:28:27.914 --rc geninfo_unexecuted_blocks=1 00:28:27.914 00:28:27.914 ' 00:28:27.914 17:25:28 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:27.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.914 --rc genhtml_branch_coverage=1 00:28:27.914 --rc genhtml_function_coverage=1 00:28:27.914 --rc genhtml_legend=1 00:28:27.914 --rc geninfo_all_blocks=1 00:28:27.914 --rc geninfo_unexecuted_blocks=1 00:28:27.914 00:28:27.914 ' 00:28:27.914 17:25:28 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:27.914 17:25:28 -- nvmf/common.sh@7 -- # uname -s 00:28:27.914 17:25:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:27.914 17:25:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:27.914 17:25:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:27.914 17:25:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:27.914 17:25:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:27.914 17:25:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:27.914 17:25:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:27.914 17:25:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:27.914 17:25:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:27.914 17:25:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:27.914 17:25:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b224b750-caac-4cbd-bbde-095c4ddf7e9f 00:28:27.914 17:25:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=b224b750-caac-4cbd-bbde-095c4ddf7e9f 00:28:27.914 17:25:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:27.914 17:25:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:27.914 17:25:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:28:27.914 17:25:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:28:27.914 17:25:28 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:27.914 17:25:28 -- scripts/common.sh@15 -- # shopt -s extglob 00:28:27.914 17:25:28 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:27.914 17:25:28 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:27.914 17:25:28 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:27.914 17:25:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.914 17:25:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.914 17:25:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.914 17:25:28 -- paths/export.sh@5 -- # export PATH 00:28:27.914 17:25:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.914 17:25:28 -- nvmf/common.sh@51 -- # : 0 00:28:27.914 17:25:28 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:27.914 17:25:28 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:27.914 17:25:28 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:28:27.914 17:25:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:27.914 17:25:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:27.914 17:25:28 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:27.914 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:27.914 17:25:28 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:27.914 17:25:28 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:27.914 17:25:28 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:27.914 17:25:28 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:28:27.914 17:25:28 -- spdk/autotest.sh@32 -- # uname -s 00:28:27.914 17:25:28 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:28:27.914 17:25:28 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:28:27.914 17:25:28 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:28:27.914 17:25:28 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:28:27.914 17:25:28 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:28:27.914 17:25:28 -- spdk/autotest.sh@44 -- # modprobe nbd 00:28:27.914 17:25:28 -- spdk/autotest.sh@46 -- # type -P udevadm 00:28:27.914 17:25:28 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:28:27.914 17:25:28 -- spdk/autotest.sh@48 -- # udevadm_pid=54505 00:28:27.914 17:25:28 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:28:27.914 17:25:28 -- pm/common@17 -- # local monitor 00:28:27.914 17:25:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:28:27.914 17:25:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:28:27.914 17:25:28 -- pm/common@21 -- # date +%s 00:28:27.914 17:25:28 -- pm/common@25 -- # sleep 1 00:28:27.914 17:25:28 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:28:27.914 17:25:28 -- 
pm/common@21 -- # date +%s 00:28:27.915 17:25:28 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732641928 00:28:27.915 17:25:28 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732641928 00:28:27.915 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732641928_collect-vmstat.pm.log 00:28:27.915 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732641928_collect-cpu-load.pm.log 00:28:28.849 17:25:29 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:28:28.849 17:25:29 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:28:28.849 17:25:29 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:28.849 17:25:29 -- common/autotest_common.sh@10 -- # set +x 00:28:28.849 17:25:29 -- spdk/autotest.sh@59 -- # create_test_list 00:28:28.849 17:25:29 -- common/autotest_common.sh@752 -- # xtrace_disable 00:28:28.849 17:25:29 -- common/autotest_common.sh@10 -- # set +x 00:28:29.109 17:25:29 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:28:29.109 17:25:29 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:28:29.109 17:25:29 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:28:29.109 17:25:29 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:28:29.109 17:25:29 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:28:29.109 17:25:29 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:28:29.109 17:25:29 -- common/autotest_common.sh@1457 -- # uname 00:28:29.109 17:25:29 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:28:29.109 17:25:29 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:28:29.109 17:25:29 -- common/autotest_common.sh@1477 -- 
# uname 00:28:29.109 17:25:29 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:28:29.109 17:25:29 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:28:29.109 17:25:29 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:28:29.109 lcov: LCOV version 1.15 00:28:29.109 17:25:29 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:28:47.204 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:28:47.204 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:29:02.100 17:26:01 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:29:02.100 17:26:01 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:02.100 17:26:01 -- common/autotest_common.sh@10 -- # set +x 00:29:02.100 17:26:01 -- spdk/autotest.sh@78 -- # rm -f 00:29:02.100 17:26:01 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:02.100 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:02.360 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:29:02.360 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:29:02.360 17:26:02 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:29:02.360 17:26:02 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:29:02.360 17:26:02 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:29:02.360 17:26:02 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:29:02.360 
17:26:02 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:29:02.360 17:26:02 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:29:02.360 17:26:02 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:29:02.360 17:26:02 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:02.360 17:26:02 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:29:02.360 17:26:02 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:29:02.360 17:26:02 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:29:02.360 17:26:02 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:29:02.360 17:26:02 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:29:02.360 17:26:02 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:29:02.360 17:26:02 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:29:02.360 17:26:02 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:29:02.360 17:26:02 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:29:02.360 17:26:02 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:29:02.360 17:26:02 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:29:02.360 17:26:02 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:29:02.360 17:26:02 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:29:02.360 17:26:02 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:29:02.360 17:26:02 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:29:02.360 17:26:02 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:29:02.360 17:26:02 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:29:02.360 17:26:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:29:02.360 17:26:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:29:02.360 17:26:02 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:29:02.360 17:26:02 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:29:02.360 17:26:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:29:02.360 No valid GPT data, bailing 00:29:02.360 17:26:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:02.360 17:26:02 -- scripts/common.sh@394 -- # pt= 00:29:02.360 17:26:02 -- scripts/common.sh@395 -- # return 1 00:29:02.360 17:26:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:29:02.360 1+0 records in 00:29:02.360 1+0 records out 00:29:02.360 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00661191 s, 159 MB/s 00:29:02.360 17:26:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:29:02.360 17:26:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:29:02.360 17:26:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:29:02.360 17:26:02 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:29:02.360 17:26:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:29:02.360 No valid GPT data, bailing 00:29:02.360 17:26:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:29:02.360 17:26:03 -- scripts/common.sh@394 -- # pt= 00:29:02.360 17:26:03 -- scripts/common.sh@395 -- # return 1 00:29:02.360 17:26:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:29:02.620 1+0 records in 00:29:02.620 1+0 records out 00:29:02.620 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00448692 s, 234 MB/s 00:29:02.620 17:26:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:29:02.620 17:26:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:29:02.620 17:26:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:29:02.620 17:26:03 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:29:02.620 17:26:03 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:29:02.620 No valid GPT data, bailing 00:29:02.620 17:26:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:29:02.620 17:26:03 -- scripts/common.sh@394 -- # pt= 00:29:02.620 17:26:03 -- scripts/common.sh@395 -- # return 1 00:29:02.620 17:26:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:29:02.620 1+0 records in 00:29:02.620 1+0 records out 00:29:02.620 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00376955 s, 278 MB/s 00:29:02.620 17:26:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:29:02.620 17:26:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:29:02.620 17:26:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:29:02.620 17:26:03 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:29:02.620 17:26:03 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:29:02.620 No valid GPT data, bailing 00:29:02.620 17:26:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:29:02.620 17:26:03 -- scripts/common.sh@394 -- # pt= 00:29:02.620 17:26:03 -- scripts/common.sh@395 -- # return 1 00:29:02.620 17:26:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:29:02.620 1+0 records in 00:29:02.620 1+0 records out 00:29:02.620 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00582164 s, 180 MB/s 00:29:02.620 17:26:03 -- spdk/autotest.sh@105 -- # sync 00:29:02.880 17:26:03 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:29:02.880 17:26:03 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:29:02.880 17:26:03 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:29:05.418 17:26:05 -- spdk/autotest.sh@111 -- # uname -s 00:29:05.418 17:26:05 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:29:05.418 17:26:05 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:29:05.418 17:26:05 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:29:06.353 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:06.353 Hugepages 00:29:06.353 node hugesize free / total 00:29:06.353 node0 1048576kB 0 / 0 00:29:06.353 node0 2048kB 0 / 0 00:29:06.353 00:29:06.353 Type BDF Vendor Device NUMA Driver Device Block devices 00:29:06.353 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:29:06.353 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:29:06.612 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:29:06.612 17:26:07 -- spdk/autotest.sh@117 -- # uname -s 00:29:06.612 17:26:07 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:29:06.612 17:26:07 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:29:06.612 17:26:07 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:07.549 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:07.549 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:29:07.549 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:29:07.549 17:26:08 -- common/autotest_common.sh@1517 -- # sleep 1 00:29:08.929 17:26:09 -- common/autotest_common.sh@1518 -- # bdfs=() 00:29:08.929 17:26:09 -- common/autotest_common.sh@1518 -- # local bdfs 00:29:08.929 17:26:09 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:29:08.929 17:26:09 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:29:08.929 17:26:09 -- common/autotest_common.sh@1498 -- # bdfs=() 00:29:08.929 17:26:09 -- common/autotest_common.sh@1498 -- # local bdfs 00:29:08.929 17:26:09 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:08.929 17:26:09 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:08.929 17:26:09 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:29:08.929 17:26:09 -- 
common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:29:08.929 17:26:09 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:29:08.929 17:26:09 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:09.188 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:09.188 Waiting for block devices as requested 00:29:09.188 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:29:09.447 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:29:09.447 17:26:09 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:29:09.447 17:26:09 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:29:09.447 17:26:09 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:29:09.447 17:26:09 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:29:09.447 17:26:10 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:29:09.447 17:26:10 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:29:09.447 17:26:10 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:29:09.447 17:26:10 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:29:09.447 17:26:10 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:29:09.447 17:26:10 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:29:09.447 17:26:10 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:29:09.447 17:26:10 -- common/autotest_common.sh@1531 -- # grep oacs 00:29:09.447 17:26:10 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:29:09.447 17:26:10 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:29:09.447 17:26:10 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:29:09.447 17:26:10 -- common/autotest_common.sh@1534 -- 
# [[ 8 -ne 0 ]] 00:29:09.447 17:26:10 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:29:09.447 17:26:10 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:29:09.447 17:26:10 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:29:09.447 17:26:10 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:29:09.447 17:26:10 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:29:09.447 17:26:10 -- common/autotest_common.sh@1543 -- # continue 00:29:09.447 17:26:10 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:29:09.447 17:26:10 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:29:09.447 17:26:10 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:29:09.447 17:26:10 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:29:09.447 17:26:10 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:29:09.447 17:26:10 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:29:09.447 17:26:10 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:29:09.447 17:26:10 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:29:09.447 17:26:10 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:29:09.447 17:26:10 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:29:09.447 17:26:10 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:29:09.447 17:26:10 -- common/autotest_common.sh@1531 -- # grep oacs 00:29:09.447 17:26:10 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:29:09.447 17:26:10 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:29:09.447 17:26:10 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:29:09.447 17:26:10 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:29:09.447 17:26:10 -- common/autotest_common.sh@1540 -- # nvme id-ctrl 
/dev/nvme0 00:29:09.447 17:26:10 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:29:09.447 17:26:10 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:29:09.447 17:26:10 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:29:09.447 17:26:10 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:29:09.447 17:26:10 -- common/autotest_common.sh@1543 -- # continue 00:29:09.447 17:26:10 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:29:09.447 17:26:10 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:09.447 17:26:10 -- common/autotest_common.sh@10 -- # set +x 00:29:09.447 17:26:10 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:29:09.447 17:26:10 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:09.447 17:26:10 -- common/autotest_common.sh@10 -- # set +x 00:29:09.707 17:26:10 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:10.276 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:10.276 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:29:10.276 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:29:10.276 17:26:10 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:29:10.276 17:26:10 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:10.276 17:26:10 -- common/autotest_common.sh@10 -- # set +x 00:29:10.536 17:26:11 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:29:10.536 17:26:11 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:29:10.536 17:26:11 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:29:10.536 17:26:11 -- common/autotest_common.sh@1563 -- # bdfs=() 00:29:10.536 17:26:11 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:29:10.536 17:26:11 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:29:10.536 17:26:11 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:29:10.536 17:26:11 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:29:10.536 
17:26:11 -- common/autotest_common.sh@1498 -- # bdfs=() 00:29:10.536 17:26:11 -- common/autotest_common.sh@1498 -- # local bdfs 00:29:10.536 17:26:11 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:10.536 17:26:11 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:10.536 17:26:11 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:29:10.536 17:26:11 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:29:10.536 17:26:11 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:29:10.536 17:26:11 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:29:10.536 17:26:11 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:29:10.536 17:26:11 -- common/autotest_common.sh@1566 -- # device=0x0010 00:29:10.536 17:26:11 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:29:10.536 17:26:11 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:29:10.536 17:26:11 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:29:10.536 17:26:11 -- common/autotest_common.sh@1566 -- # device=0x0010 00:29:10.536 17:26:11 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:29:10.536 17:26:11 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:29:10.536 17:26:11 -- common/autotest_common.sh@1572 -- # return 0 00:29:10.536 17:26:11 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:29:10.536 17:26:11 -- common/autotest_common.sh@1580 -- # return 0 00:29:10.536 17:26:11 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:29:10.536 17:26:11 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:29:10.536 17:26:11 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:29:10.536 17:26:11 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:29:10.536 17:26:11 -- spdk/autotest.sh@149 -- # timing_enter lib 00:29:10.536 17:26:11 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:29:10.536 17:26:11 -- common/autotest_common.sh@10 -- # set +x 00:29:10.536 17:26:11 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:29:10.536 17:26:11 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:29:10.536 17:26:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:10.536 17:26:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:10.536 17:26:11 -- common/autotest_common.sh@10 -- # set +x 00:29:10.536 ************************************ 00:29:10.536 START TEST env 00:29:10.536 ************************************ 00:29:10.537 17:26:11 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:29:10.537 * Looking for test storage... 00:29:10.795 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:29:10.795 17:26:11 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:10.796 17:26:11 env -- common/autotest_common.sh@1693 -- # lcov --version 00:29:10.796 17:26:11 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:10.796 17:26:11 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:10.796 17:26:11 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:10.796 17:26:11 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:10.796 17:26:11 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:10.796 17:26:11 env -- scripts/common.sh@336 -- # IFS=.-: 00:29:10.796 17:26:11 env -- scripts/common.sh@336 -- # read -ra ver1 00:29:10.796 17:26:11 env -- scripts/common.sh@337 -- # IFS=.-: 00:29:10.796 17:26:11 env -- scripts/common.sh@337 -- # read -ra ver2 00:29:10.796 17:26:11 env -- scripts/common.sh@338 -- # local 'op=<' 00:29:10.796 17:26:11 env -- scripts/common.sh@340 -- # ver1_l=2 00:29:10.796 17:26:11 env -- scripts/common.sh@341 -- # ver2_l=1 00:29:10.796 17:26:11 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:10.796 17:26:11 env -- 
scripts/common.sh@344 -- # case "$op" in 00:29:10.796 17:26:11 env -- scripts/common.sh@345 -- # : 1 00:29:10.796 17:26:11 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:10.796 17:26:11 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:10.796 17:26:11 env -- scripts/common.sh@365 -- # decimal 1 00:29:10.796 17:26:11 env -- scripts/common.sh@353 -- # local d=1 00:29:10.796 17:26:11 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:10.796 17:26:11 env -- scripts/common.sh@355 -- # echo 1 00:29:10.796 17:26:11 env -- scripts/common.sh@365 -- # ver1[v]=1 00:29:10.796 17:26:11 env -- scripts/common.sh@366 -- # decimal 2 00:29:10.796 17:26:11 env -- scripts/common.sh@353 -- # local d=2 00:29:10.796 17:26:11 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:10.796 17:26:11 env -- scripts/common.sh@355 -- # echo 2 00:29:10.796 17:26:11 env -- scripts/common.sh@366 -- # ver2[v]=2 00:29:10.796 17:26:11 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:10.796 17:26:11 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:10.796 17:26:11 env -- scripts/common.sh@368 -- # return 0 00:29:10.796 17:26:11 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:10.796 17:26:11 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:10.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.796 --rc genhtml_branch_coverage=1 00:29:10.796 --rc genhtml_function_coverage=1 00:29:10.796 --rc genhtml_legend=1 00:29:10.796 --rc geninfo_all_blocks=1 00:29:10.796 --rc geninfo_unexecuted_blocks=1 00:29:10.796 00:29:10.796 ' 00:29:10.796 17:26:11 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:10.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.796 --rc genhtml_branch_coverage=1 00:29:10.796 --rc genhtml_function_coverage=1 00:29:10.796 --rc genhtml_legend=1 00:29:10.796 --rc 
geninfo_all_blocks=1 00:29:10.796 --rc geninfo_unexecuted_blocks=1 00:29:10.796 00:29:10.796 ' 00:29:10.796 17:26:11 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:10.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.796 --rc genhtml_branch_coverage=1 00:29:10.796 --rc genhtml_function_coverage=1 00:29:10.796 --rc genhtml_legend=1 00:29:10.796 --rc geninfo_all_blocks=1 00:29:10.796 --rc geninfo_unexecuted_blocks=1 00:29:10.796 00:29:10.796 ' 00:29:10.796 17:26:11 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:10.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.796 --rc genhtml_branch_coverage=1 00:29:10.796 --rc genhtml_function_coverage=1 00:29:10.796 --rc genhtml_legend=1 00:29:10.796 --rc geninfo_all_blocks=1 00:29:10.796 --rc geninfo_unexecuted_blocks=1 00:29:10.796 00:29:10.796 ' 00:29:10.796 17:26:11 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:29:10.796 17:26:11 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:10.796 17:26:11 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:10.796 17:26:11 env -- common/autotest_common.sh@10 -- # set +x 00:29:10.796 ************************************ 00:29:10.796 START TEST env_memory 00:29:10.796 ************************************ 00:29:10.796 17:26:11 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:29:10.796 00:29:10.796 00:29:10.796 CUnit - A unit testing framework for C - Version 2.1-3 00:29:10.796 http://cunit.sourceforge.net/ 00:29:10.796 00:29:10.796 00:29:10.796 Suite: memory 00:29:10.796 Test: alloc and free memory map ...[2024-11-26 17:26:11.419460] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:29:10.796 passed 00:29:10.796 Test: mem map translation ...[2024-11-26 17:26:11.476887] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:29:10.796 [2024-11-26 17:26:11.476950] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:29:10.796 [2024-11-26 17:26:11.477043] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:29:10.796 [2024-11-26 17:26:11.477066] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:29:11.055 passed 00:29:11.055 Test: mem map registration ...[2024-11-26 17:26:11.554764] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:29:11.055 [2024-11-26 17:26:11.554821] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:29:11.055 passed 00:29:11.055 Test: mem map adjacent registrations ...passed 00:29:11.055 00:29:11.055 Run Summary: Type Total Ran Passed Failed Inactive 00:29:11.055 suites 1 1 n/a 0 0 00:29:11.055 tests 4 4 4 0 0 00:29:11.055 asserts 152 152 152 0 n/a 00:29:11.055 00:29:11.055 Elapsed time = 0.295 seconds 00:29:11.055 00:29:11.055 real 0m0.340s 00:29:11.055 user 0m0.302s 00:29:11.055 sys 0m0.030s 00:29:11.055 17:26:11 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:11.055 17:26:11 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:29:11.055 ************************************ 00:29:11.055 END TEST env_memory 00:29:11.055 ************************************ 00:29:11.055 17:26:11 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:29:11.055 
17:26:11 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:11.055 17:26:11 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:11.055 17:26:11 env -- common/autotest_common.sh@10 -- # set +x 00:29:11.055 ************************************ 00:29:11.055 START TEST env_vtophys 00:29:11.055 ************************************ 00:29:11.055 17:26:11 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:29:11.315 EAL: lib.eal log level changed from notice to debug 00:29:11.315 EAL: Detected lcore 0 as core 0 on socket 0 00:29:11.315 EAL: Detected lcore 1 as core 0 on socket 0 00:29:11.315 EAL: Detected lcore 2 as core 0 on socket 0 00:29:11.315 EAL: Detected lcore 3 as core 0 on socket 0 00:29:11.315 EAL: Detected lcore 4 as core 0 on socket 0 00:29:11.315 EAL: Detected lcore 5 as core 0 on socket 0 00:29:11.315 EAL: Detected lcore 6 as core 0 on socket 0 00:29:11.315 EAL: Detected lcore 7 as core 0 on socket 0 00:29:11.315 EAL: Detected lcore 8 as core 0 on socket 0 00:29:11.315 EAL: Detected lcore 9 as core 0 on socket 0 00:29:11.315 EAL: Maximum logical cores by configuration: 128 00:29:11.315 EAL: Detected CPU lcores: 10 00:29:11.315 EAL: Detected NUMA nodes: 1 00:29:11.315 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:29:11.315 EAL: Detected shared linkage of DPDK 00:29:11.315 EAL: No shared files mode enabled, IPC will be disabled 00:29:11.315 EAL: Selected IOVA mode 'PA' 00:29:11.315 EAL: Probing VFIO support... 00:29:11.315 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:29:11.315 EAL: VFIO modules not loaded, skipping VFIO support... 00:29:11.315 EAL: Ask a virtual area of 0x2e000 bytes 00:29:11.315 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:29:11.315 EAL: Setting up physically contiguous memory... 
00:29:11.315 EAL: Setting maximum number of open files to 524288 00:29:11.315 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:29:11.315 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:29:11.315 EAL: Ask a virtual area of 0x61000 bytes 00:29:11.315 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:29:11.315 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:29:11.315 EAL: Ask a virtual area of 0x400000000 bytes 00:29:11.315 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:29:11.315 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:29:11.315 EAL: Ask a virtual area of 0x61000 bytes 00:29:11.315 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:29:11.315 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:29:11.315 EAL: Ask a virtual area of 0x400000000 bytes 00:29:11.315 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:29:11.316 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:29:11.316 EAL: Ask a virtual area of 0x61000 bytes 00:29:11.316 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:29:11.316 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:29:11.316 EAL: Ask a virtual area of 0x400000000 bytes 00:29:11.316 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:29:11.316 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:29:11.316 EAL: Ask a virtual area of 0x61000 bytes 00:29:11.316 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:29:11.316 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:29:11.316 EAL: Ask a virtual area of 0x400000000 bytes 00:29:11.316 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:29:11.316 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:29:11.316 EAL: Hugepages will be freed exactly as allocated. 
00:29:11.316 EAL: No shared files mode enabled, IPC is disabled 00:29:11.316 EAL: No shared files mode enabled, IPC is disabled 00:29:11.316 EAL: TSC frequency is ~2290000 KHz 00:29:11.316 EAL: Main lcore 0 is ready (tid=7f6c462c9a40;cpuset=[0]) 00:29:11.316 EAL: Trying to obtain current memory policy. 00:29:11.316 EAL: Setting policy MPOL_PREFERRED for socket 0 00:29:11.316 EAL: Restoring previous memory policy: 0 00:29:11.316 EAL: request: mp_malloc_sync 00:29:11.316 EAL: No shared files mode enabled, IPC is disabled 00:29:11.316 EAL: Heap on socket 0 was expanded by 2MB 00:29:11.316 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:29:11.316 EAL: No PCI address specified using 'addr=' in: bus=pci 00:29:11.316 EAL: Mem event callback 'spdk:(nil)' registered 00:29:11.316 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:29:11.316 00:29:11.316 00:29:11.316 CUnit - A unit testing framework for C - Version 2.1-3 00:29:11.316 http://cunit.sourceforge.net/ 00:29:11.316 00:29:11.316 00:29:11.316 Suite: components_suite 00:29:11.884 Test: vtophys_malloc_test ...passed 00:29:11.884 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:29:11.884 EAL: Setting policy MPOL_PREFERRED for socket 0 00:29:11.884 EAL: Restoring previous memory policy: 4 00:29:11.884 EAL: Calling mem event callback 'spdk:(nil)' 00:29:11.884 EAL: request: mp_malloc_sync 00:29:11.884 EAL: No shared files mode enabled, IPC is disabled 00:29:11.884 EAL: Heap on socket 0 was expanded by 4MB 00:29:11.884 EAL: Calling mem event callback 'spdk:(nil)' 00:29:11.884 EAL: request: mp_malloc_sync 00:29:11.884 EAL: No shared files mode enabled, IPC is disabled 00:29:11.884 EAL: Heap on socket 0 was shrunk by 4MB 00:29:11.884 EAL: Trying to obtain current memory policy. 
00:29:11.884 EAL: Setting policy MPOL_PREFERRED for socket 0 00:29:11.884 EAL: Restoring previous memory policy: 4 00:29:11.884 EAL: Calling mem event callback 'spdk:(nil)' 00:29:11.884 EAL: request: mp_malloc_sync 00:29:11.884 EAL: No shared files mode enabled, IPC is disabled 00:29:11.884 EAL: Heap on socket 0 was expanded by 6MB 00:29:11.884 EAL: Calling mem event callback 'spdk:(nil)' 00:29:11.884 EAL: request: mp_malloc_sync 00:29:11.884 EAL: No shared files mode enabled, IPC is disabled 00:29:11.884 EAL: Heap on socket 0 was shrunk by 6MB 00:29:11.884 EAL: Trying to obtain current memory policy. 00:29:11.884 EAL: Setting policy MPOL_PREFERRED for socket 0 00:29:11.884 EAL: Restoring previous memory policy: 4 00:29:11.884 EAL: Calling mem event callback 'spdk:(nil)' 00:29:11.884 EAL: request: mp_malloc_sync 00:29:11.884 EAL: No shared files mode enabled, IPC is disabled 00:29:11.884 EAL: Heap on socket 0 was expanded by 10MB 00:29:11.884 EAL: Calling mem event callback 'spdk:(nil)' 00:29:11.884 EAL: request: mp_malloc_sync 00:29:11.884 EAL: No shared files mode enabled, IPC is disabled 00:29:11.884 EAL: Heap on socket 0 was shrunk by 10MB 00:29:11.884 EAL: Trying to obtain current memory policy. 00:29:11.884 EAL: Setting policy MPOL_PREFERRED for socket 0 00:29:11.884 EAL: Restoring previous memory policy: 4 00:29:11.884 EAL: Calling mem event callback 'spdk:(nil)' 00:29:11.884 EAL: request: mp_malloc_sync 00:29:11.884 EAL: No shared files mode enabled, IPC is disabled 00:29:11.884 EAL: Heap on socket 0 was expanded by 18MB 00:29:11.884 EAL: Calling mem event callback 'spdk:(nil)' 00:29:11.884 EAL: request: mp_malloc_sync 00:29:11.884 EAL: No shared files mode enabled, IPC is disabled 00:29:11.884 EAL: Heap on socket 0 was shrunk by 18MB 00:29:11.884 EAL: Trying to obtain current memory policy. 
00:29:11.884 EAL: Setting policy MPOL_PREFERRED for socket 0 00:29:11.884 EAL: Restoring previous memory policy: 4 00:29:11.884 EAL: Calling mem event callback 'spdk:(nil)' 00:29:11.884 EAL: request: mp_malloc_sync 00:29:11.884 EAL: No shared files mode enabled, IPC is disabled 00:29:11.884 EAL: Heap on socket 0 was expanded by 34MB 00:29:12.144 EAL: Calling mem event callback 'spdk:(nil)' 00:29:12.144 EAL: request: mp_malloc_sync 00:29:12.144 EAL: No shared files mode enabled, IPC is disabled 00:29:12.144 EAL: Heap on socket 0 was shrunk by 34MB 00:29:12.144 EAL: Trying to obtain current memory policy. 00:29:12.144 EAL: Setting policy MPOL_PREFERRED for socket 0 00:29:12.144 EAL: Restoring previous memory policy: 4 00:29:12.144 EAL: Calling mem event callback 'spdk:(nil)' 00:29:12.144 EAL: request: mp_malloc_sync 00:29:12.144 EAL: No shared files mode enabled, IPC is disabled 00:29:12.144 EAL: Heap on socket 0 was expanded by 66MB 00:29:12.144 EAL: Calling mem event callback 'spdk:(nil)' 00:29:12.403 EAL: request: mp_malloc_sync 00:29:12.403 EAL: No shared files mode enabled, IPC is disabled 00:29:12.403 EAL: Heap on socket 0 was shrunk by 66MB 00:29:12.403 EAL: Trying to obtain current memory policy. 00:29:12.403 EAL: Setting policy MPOL_PREFERRED for socket 0 00:29:12.403 EAL: Restoring previous memory policy: 4 00:29:12.403 EAL: Calling mem event callback 'spdk:(nil)' 00:29:12.403 EAL: request: mp_malloc_sync 00:29:12.403 EAL: No shared files mode enabled, IPC is disabled 00:29:12.403 EAL: Heap on socket 0 was expanded by 130MB 00:29:12.663 EAL: Calling mem event callback 'spdk:(nil)' 00:29:12.663 EAL: request: mp_malloc_sync 00:29:12.663 EAL: No shared files mode enabled, IPC is disabled 00:29:12.663 EAL: Heap on socket 0 was shrunk by 130MB 00:29:12.921 EAL: Trying to obtain current memory policy. 
00:29:12.921 EAL: Setting policy MPOL_PREFERRED for socket 0 00:29:12.921 EAL: Restoring previous memory policy: 4 00:29:12.921 EAL: Calling mem event callback 'spdk:(nil)' 00:29:12.921 EAL: request: mp_malloc_sync 00:29:12.921 EAL: No shared files mode enabled, IPC is disabled 00:29:12.921 EAL: Heap on socket 0 was expanded by 258MB 00:29:13.488 EAL: Calling mem event callback 'spdk:(nil)' 00:29:13.488 EAL: request: mp_malloc_sync 00:29:13.488 EAL: No shared files mode enabled, IPC is disabled 00:29:13.488 EAL: Heap on socket 0 was shrunk by 258MB 00:29:14.056 EAL: Trying to obtain current memory policy. 00:29:14.056 EAL: Setting policy MPOL_PREFERRED for socket 0 00:29:14.056 EAL: Restoring previous memory policy: 4 00:29:14.056 EAL: Calling mem event callback 'spdk:(nil)' 00:29:14.056 EAL: request: mp_malloc_sync 00:29:14.056 EAL: No shared files mode enabled, IPC is disabled 00:29:14.056 EAL: Heap on socket 0 was expanded by 514MB 00:29:14.991 EAL: Calling mem event callback 'spdk:(nil)' 00:29:15.251 EAL: request: mp_malloc_sync 00:29:15.251 EAL: No shared files mode enabled, IPC is disabled 00:29:15.251 EAL: Heap on socket 0 was shrunk by 514MB 00:29:16.188 EAL: Trying to obtain current memory policy. 
00:29:16.188 EAL: Setting policy MPOL_PREFERRED for socket 0 00:29:16.188 EAL: Restoring previous memory policy: 4 00:29:16.188 EAL: Calling mem event callback 'spdk:(nil)' 00:29:16.188 EAL: request: mp_malloc_sync 00:29:16.188 EAL: No shared files mode enabled, IPC is disabled 00:29:16.188 EAL: Heap on socket 0 was expanded by 1026MB 00:29:18.095 EAL: Calling mem event callback 'spdk:(nil)' 00:29:18.354 EAL: request: mp_malloc_sync 00:29:18.354 EAL: No shared files mode enabled, IPC is disabled 00:29:18.354 EAL: Heap on socket 0 was shrunk by 1026MB 00:29:20.259 passed 00:29:20.259 00:29:20.259 Run Summary: Type Total Ran Passed Failed Inactive 00:29:20.259 suites 1 1 n/a 0 0 00:29:20.259 tests 2 2 2 0 0 00:29:20.259 asserts 5866 5866 5866 0 n/a 00:29:20.259 00:29:20.259 Elapsed time = 8.803 seconds 00:29:20.259 EAL: Calling mem event callback 'spdk:(nil)' 00:29:20.259 EAL: request: mp_malloc_sync 00:29:20.259 EAL: No shared files mode enabled, IPC is disabled 00:29:20.259 EAL: Heap on socket 0 was shrunk by 2MB 00:29:20.259 EAL: No shared files mode enabled, IPC is disabled 00:29:20.259 EAL: No shared files mode enabled, IPC is disabled 00:29:20.259 EAL: No shared files mode enabled, IPC is disabled 00:29:20.259 00:29:20.259 real 0m9.133s 00:29:20.259 user 0m8.104s 00:29:20.259 sys 0m0.866s 00:29:20.259 17:26:20 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:20.259 17:26:20 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:29:20.259 ************************************ 00:29:20.259 END TEST env_vtophys 00:29:20.259 ************************************ 00:29:20.259 17:26:20 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:29:20.259 17:26:20 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:20.259 17:26:20 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:20.259 17:26:20 env -- common/autotest_common.sh@10 -- # set +x 00:29:20.259 
************************************ 00:29:20.259 START TEST env_pci 00:29:20.259 ************************************ 00:29:20.259 17:26:20 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:29:20.518 00:29:20.518 00:29:20.518 CUnit - A unit testing framework for C - Version 2.1-3 00:29:20.518 http://cunit.sourceforge.net/ 00:29:20.518 00:29:20.518 00:29:20.518 Suite: pci 00:29:20.518 Test: pci_hook ...[2024-11-26 17:26:20.983218] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56839 has claimed it 00:29:20.518 EAL: Cannot find device (10000:00:01.0) 00:29:20.518 EAL: Failed to attach device on primary process 00:29:20.518 passed 00:29:20.518 00:29:20.518 Run Summary: Type Total Ran Passed Failed Inactive 00:29:20.518 suites 1 1 n/a 0 0 00:29:20.518 tests 1 1 1 0 0 00:29:20.518 asserts 25 25 25 0 n/a 00:29:20.518 00:29:20.518 Elapsed time = 0.006 seconds 00:29:20.518 00:29:20.518 real 0m0.112s 00:29:20.518 user 0m0.047s 00:29:20.518 sys 0m0.064s 00:29:20.518 17:26:21 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:20.518 17:26:21 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:29:20.518 ************************************ 00:29:20.518 END TEST env_pci 00:29:20.518 ************************************ 00:29:20.518 17:26:21 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:29:20.518 17:26:21 env -- env/env.sh@15 -- # uname 00:29:20.518 17:26:21 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:29:20.518 17:26:21 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:29:20.518 17:26:21 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:29:20.518 17:26:21 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:29:20.518 17:26:21 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:20.518 17:26:21 env -- common/autotest_common.sh@10 -- # set +x 00:29:20.518 ************************************ 00:29:20.518 START TEST env_dpdk_post_init 00:29:20.518 ************************************ 00:29:20.518 17:26:21 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:29:20.518 EAL: Detected CPU lcores: 10 00:29:20.518 EAL: Detected NUMA nodes: 1 00:29:20.518 EAL: Detected shared linkage of DPDK 00:29:20.518 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:29:20.518 EAL: Selected IOVA mode 'PA' 00:29:20.777 TELEMETRY: No legacy callbacks, legacy socket not created 00:29:20.777 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:29:20.777 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:29:20.777 Starting DPDK initialization... 00:29:20.777 Starting SPDK post initialization... 00:29:20.777 SPDK NVMe probe 00:29:20.777 Attaching to 0000:00:10.0 00:29:20.777 Attaching to 0000:00:11.0 00:29:20.777 Attached to 0000:00:10.0 00:29:20.777 Attached to 0000:00:11.0 00:29:20.777 Cleaning up... 
00:29:20.777 00:29:20.777 real 0m0.290s 00:29:20.777 user 0m0.097s 00:29:20.777 sys 0m0.094s 00:29:20.777 17:26:21 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:20.777 17:26:21 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:29:20.777 ************************************ 00:29:20.777 END TEST env_dpdk_post_init 00:29:20.777 ************************************ 00:29:20.777 17:26:21 env -- env/env.sh@26 -- # uname 00:29:20.777 17:26:21 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:29:20.777 17:26:21 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:29:20.777 17:26:21 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:20.777 17:26:21 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:20.777 17:26:21 env -- common/autotest_common.sh@10 -- # set +x 00:29:21.036 ************************************ 00:29:21.036 START TEST env_mem_callbacks 00:29:21.036 ************************************ 00:29:21.036 17:26:21 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:29:21.036 EAL: Detected CPU lcores: 10 00:29:21.036 EAL: Detected NUMA nodes: 1 00:29:21.036 EAL: Detected shared linkage of DPDK 00:29:21.036 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:29:21.036 EAL: Selected IOVA mode 'PA' 00:29:21.036 TELEMETRY: No legacy callbacks, legacy socket not created 00:29:21.036 00:29:21.036 00:29:21.036 CUnit - A unit testing framework for C - Version 2.1-3 00:29:21.036 http://cunit.sourceforge.net/ 00:29:21.036 00:29:21.036 00:29:21.036 Suite: memory 00:29:21.036 Test: test ... 
00:29:21.036 register 0x200000200000 2097152 00:29:21.036 malloc 3145728 00:29:21.036 register 0x200000400000 4194304 00:29:21.036 buf 0x2000004fffc0 len 3145728 PASSED 00:29:21.036 malloc 64 00:29:21.036 buf 0x2000004ffec0 len 64 PASSED 00:29:21.036 malloc 4194304 00:29:21.036 register 0x200000800000 6291456 00:29:21.036 buf 0x2000009fffc0 len 4194304 PASSED 00:29:21.036 free 0x2000004fffc0 3145728 00:29:21.036 free 0x2000004ffec0 64 00:29:21.036 unregister 0x200000400000 4194304 PASSED 00:29:21.036 free 0x2000009fffc0 4194304 00:29:21.036 unregister 0x200000800000 6291456 PASSED 00:29:21.036 malloc 8388608 00:29:21.036 register 0x200000400000 10485760 00:29:21.036 buf 0x2000005fffc0 len 8388608 PASSED 00:29:21.036 free 0x2000005fffc0 8388608 00:29:21.295 unregister 0x200000400000 10485760 PASSED 00:29:21.295 passed 00:29:21.295 00:29:21.295 Run Summary: Type Total Ran Passed Failed Inactive 00:29:21.295 suites 1 1 n/a 0 0 00:29:21.295 tests 1 1 1 0 0 00:29:21.295 asserts 15 15 15 0 n/a 00:29:21.295 00:29:21.295 Elapsed time = 0.089 seconds 00:29:21.295 00:29:21.295 real 0m0.293s 00:29:21.295 user 0m0.105s 00:29:21.295 sys 0m0.087s 00:29:21.295 17:26:21 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:21.295 17:26:21 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:29:21.295 ************************************ 00:29:21.295 END TEST env_mem_callbacks 00:29:21.295 ************************************ 00:29:21.295 00:29:21.295 real 0m10.708s 00:29:21.295 user 0m8.873s 00:29:21.295 sys 0m1.488s 00:29:21.295 17:26:21 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:21.295 17:26:21 env -- common/autotest_common.sh@10 -- # set +x 00:29:21.295 ************************************ 00:29:21.295 END TEST env 00:29:21.295 ************************************ 00:29:21.295 17:26:21 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:29:21.295 17:26:21 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:21.295 17:26:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:21.295 17:26:21 -- common/autotest_common.sh@10 -- # set +x 00:29:21.295 ************************************ 00:29:21.295 START TEST rpc 00:29:21.295 ************************************ 00:29:21.295 17:26:21 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:29:21.555 * Looking for test storage... 00:29:21.555 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:29:21.555 17:26:22 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:21.555 17:26:22 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:21.555 17:26:22 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:29:21.555 17:26:22 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:21.555 17:26:22 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:21.555 17:26:22 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:21.555 17:26:22 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:21.555 17:26:22 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:29:21.555 17:26:22 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:29:21.555 17:26:22 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:29:21.555 17:26:22 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:29:21.555 17:26:22 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:29:21.555 17:26:22 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:29:21.555 17:26:22 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:29:21.555 17:26:22 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:21.555 17:26:22 rpc -- scripts/common.sh@344 -- # case "$op" in 00:29:21.555 17:26:22 rpc -- scripts/common.sh@345 -- # : 1 00:29:21.555 17:26:22 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:21.555 17:26:22 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:21.555 17:26:22 rpc -- scripts/common.sh@365 -- # decimal 1 00:29:21.555 17:26:22 rpc -- scripts/common.sh@353 -- # local d=1 00:29:21.555 17:26:22 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:21.555 17:26:22 rpc -- scripts/common.sh@355 -- # echo 1 00:29:21.555 17:26:22 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:29:21.555 17:26:22 rpc -- scripts/common.sh@366 -- # decimal 2 00:29:21.555 17:26:22 rpc -- scripts/common.sh@353 -- # local d=2 00:29:21.555 17:26:22 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:21.555 17:26:22 rpc -- scripts/common.sh@355 -- # echo 2 00:29:21.555 17:26:22 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:29:21.555 17:26:22 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:21.555 17:26:22 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:21.555 17:26:22 rpc -- scripts/common.sh@368 -- # return 0 00:29:21.555 17:26:22 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:21.555 17:26:22 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:21.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.555 --rc genhtml_branch_coverage=1 00:29:21.555 --rc genhtml_function_coverage=1 00:29:21.555 --rc genhtml_legend=1 00:29:21.555 --rc geninfo_all_blocks=1 00:29:21.555 --rc geninfo_unexecuted_blocks=1 00:29:21.555 00:29:21.555 ' 00:29:21.555 17:26:22 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:21.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.555 --rc genhtml_branch_coverage=1 00:29:21.555 --rc genhtml_function_coverage=1 00:29:21.555 --rc genhtml_legend=1 00:29:21.555 --rc geninfo_all_blocks=1 00:29:21.555 --rc geninfo_unexecuted_blocks=1 00:29:21.555 00:29:21.555 ' 00:29:21.555 17:26:22 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:21.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:29:21.555 --rc genhtml_branch_coverage=1 00:29:21.555 --rc genhtml_function_coverage=1 00:29:21.555 --rc genhtml_legend=1 00:29:21.555 --rc geninfo_all_blocks=1 00:29:21.555 --rc geninfo_unexecuted_blocks=1 00:29:21.555 00:29:21.555 ' 00:29:21.555 17:26:22 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:21.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.555 --rc genhtml_branch_coverage=1 00:29:21.555 --rc genhtml_function_coverage=1 00:29:21.555 --rc genhtml_legend=1 00:29:21.555 --rc geninfo_all_blocks=1 00:29:21.555 --rc geninfo_unexecuted_blocks=1 00:29:21.555 00:29:21.555 ' 00:29:21.555 17:26:22 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56972 00:29:21.555 17:26:22 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:29:21.555 17:26:22 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:29:21.555 17:26:22 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56972 00:29:21.555 17:26:22 rpc -- common/autotest_common.sh@835 -- # '[' -z 56972 ']' 00:29:21.555 17:26:22 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:21.555 17:26:22 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:21.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:21.555 17:26:22 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:21.555 17:26:22 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:21.555 17:26:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:29:21.555 [2024-11-26 17:26:22.214337] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:29:21.555 [2024-11-26 17:26:22.214462] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56972 ] 00:29:21.816 [2024-11-26 17:26:22.390630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:22.075 [2024-11-26 17:26:22.517441] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:29:22.075 [2024-11-26 17:26:22.517522] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56972' to capture a snapshot of events at runtime. 00:29:22.075 [2024-11-26 17:26:22.517535] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:22.075 [2024-11-26 17:26:22.517546] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:22.075 [2024-11-26 17:26:22.517554] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56972 for offline analysis/debug. 
00:29:22.075 [2024-11-26 17:26:22.519165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:23.013 17:26:23 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:23.014 17:26:23 rpc -- common/autotest_common.sh@868 -- # return 0 00:29:23.014 17:26:23 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:29:23.014 17:26:23 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:29:23.014 17:26:23 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:29:23.014 17:26:23 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:29:23.014 17:26:23 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:23.014 17:26:23 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:23.014 17:26:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:29:23.014 ************************************ 00:29:23.014 START TEST rpc_integrity 00:29:23.014 ************************************ 00:29:23.014 17:26:23 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:29:23.014 17:26:23 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:29:23.014 17:26:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.014 17:26:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:29:23.014 17:26:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.014 17:26:23 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:29:23.014 17:26:23 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:29:23.014 17:26:23 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:29:23.014 17:26:23 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:29:23.014 17:26:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.014 17:26:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:29:23.014 17:26:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.014 17:26:23 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:29:23.014 17:26:23 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:29:23.014 17:26:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.014 17:26:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:29:23.014 17:26:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.014 17:26:23 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:29:23.014 { 00:29:23.014 "name": "Malloc0", 00:29:23.014 "aliases": [ 00:29:23.014 "77e322b2-bdbb-449f-87b1-3c77d05e3393" 00:29:23.014 ], 00:29:23.014 "product_name": "Malloc disk", 00:29:23.014 "block_size": 512, 00:29:23.014 "num_blocks": 16384, 00:29:23.014 "uuid": "77e322b2-bdbb-449f-87b1-3c77d05e3393", 00:29:23.014 "assigned_rate_limits": { 00:29:23.014 "rw_ios_per_sec": 0, 00:29:23.014 "rw_mbytes_per_sec": 0, 00:29:23.014 "r_mbytes_per_sec": 0, 00:29:23.014 "w_mbytes_per_sec": 0 00:29:23.014 }, 00:29:23.014 "claimed": false, 00:29:23.014 "zoned": false, 00:29:23.014 "supported_io_types": { 00:29:23.014 "read": true, 00:29:23.014 "write": true, 00:29:23.014 "unmap": true, 00:29:23.014 "flush": true, 00:29:23.014 "reset": true, 00:29:23.014 "nvme_admin": false, 00:29:23.014 "nvme_io": false, 00:29:23.014 "nvme_io_md": false, 00:29:23.014 "write_zeroes": true, 00:29:23.014 "zcopy": true, 00:29:23.014 "get_zone_info": false, 00:29:23.014 "zone_management": false, 00:29:23.014 "zone_append": false, 00:29:23.014 "compare": false, 00:29:23.014 "compare_and_write": false, 00:29:23.014 "abort": true, 00:29:23.014 "seek_hole": false, 
00:29:23.014 "seek_data": false, 00:29:23.014 "copy": true, 00:29:23.014 "nvme_iov_md": false 00:29:23.014 }, 00:29:23.014 "memory_domains": [ 00:29:23.014 { 00:29:23.014 "dma_device_id": "system", 00:29:23.014 "dma_device_type": 1 00:29:23.014 }, 00:29:23.014 { 00:29:23.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:23.014 "dma_device_type": 2 00:29:23.014 } 00:29:23.014 ], 00:29:23.014 "driver_specific": {} 00:29:23.014 } 00:29:23.014 ]' 00:29:23.014 17:26:23 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:29:23.014 17:26:23 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:29:23.014 17:26:23 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:29:23.014 17:26:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.014 17:26:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:29:23.014 [2024-11-26 17:26:23.630247] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:29:23.014 [2024-11-26 17:26:23.630314] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:23.014 [2024-11-26 17:26:23.630340] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:29:23.014 [2024-11-26 17:26:23.630356] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:23.014 [2024-11-26 17:26:23.632926] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:23.014 [2024-11-26 17:26:23.632977] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:29:23.014 Passthru0 00:29:23.014 17:26:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.014 17:26:23 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:29:23.014 17:26:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.014 17:26:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:29:23.014 17:26:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.014 17:26:23 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:29:23.014 { 00:29:23.014 "name": "Malloc0", 00:29:23.014 "aliases": [ 00:29:23.014 "77e322b2-bdbb-449f-87b1-3c77d05e3393" 00:29:23.014 ], 00:29:23.014 "product_name": "Malloc disk", 00:29:23.014 "block_size": 512, 00:29:23.014 "num_blocks": 16384, 00:29:23.014 "uuid": "77e322b2-bdbb-449f-87b1-3c77d05e3393", 00:29:23.014 "assigned_rate_limits": { 00:29:23.014 "rw_ios_per_sec": 0, 00:29:23.014 "rw_mbytes_per_sec": 0, 00:29:23.014 "r_mbytes_per_sec": 0, 00:29:23.014 "w_mbytes_per_sec": 0 00:29:23.014 }, 00:29:23.014 "claimed": true, 00:29:23.014 "claim_type": "exclusive_write", 00:29:23.014 "zoned": false, 00:29:23.014 "supported_io_types": { 00:29:23.014 "read": true, 00:29:23.014 "write": true, 00:29:23.014 "unmap": true, 00:29:23.014 "flush": true, 00:29:23.014 "reset": true, 00:29:23.014 "nvme_admin": false, 00:29:23.014 "nvme_io": false, 00:29:23.014 "nvme_io_md": false, 00:29:23.014 "write_zeroes": true, 00:29:23.014 "zcopy": true, 00:29:23.014 "get_zone_info": false, 00:29:23.014 "zone_management": false, 00:29:23.014 "zone_append": false, 00:29:23.014 "compare": false, 00:29:23.014 "compare_and_write": false, 00:29:23.014 "abort": true, 00:29:23.014 "seek_hole": false, 00:29:23.014 "seek_data": false, 00:29:23.014 "copy": true, 00:29:23.014 "nvme_iov_md": false 00:29:23.014 }, 00:29:23.014 "memory_domains": [ 00:29:23.014 { 00:29:23.014 "dma_device_id": "system", 00:29:23.014 "dma_device_type": 1 00:29:23.014 }, 00:29:23.014 { 00:29:23.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:23.014 "dma_device_type": 2 00:29:23.014 } 00:29:23.014 ], 00:29:23.014 "driver_specific": {} 00:29:23.014 }, 00:29:23.014 { 00:29:23.014 "name": "Passthru0", 00:29:23.014 "aliases": [ 00:29:23.014 "30f87937-7068-52d1-87db-adaaeb3ee046" 00:29:23.014 ], 00:29:23.014 "product_name": "passthru", 00:29:23.014 
"block_size": 512, 00:29:23.014 "num_blocks": 16384, 00:29:23.014 "uuid": "30f87937-7068-52d1-87db-adaaeb3ee046", 00:29:23.014 "assigned_rate_limits": { 00:29:23.014 "rw_ios_per_sec": 0, 00:29:23.014 "rw_mbytes_per_sec": 0, 00:29:23.014 "r_mbytes_per_sec": 0, 00:29:23.014 "w_mbytes_per_sec": 0 00:29:23.014 }, 00:29:23.014 "claimed": false, 00:29:23.014 "zoned": false, 00:29:23.014 "supported_io_types": { 00:29:23.014 "read": true, 00:29:23.014 "write": true, 00:29:23.014 "unmap": true, 00:29:23.014 "flush": true, 00:29:23.014 "reset": true, 00:29:23.014 "nvme_admin": false, 00:29:23.014 "nvme_io": false, 00:29:23.014 "nvme_io_md": false, 00:29:23.014 "write_zeroes": true, 00:29:23.014 "zcopy": true, 00:29:23.014 "get_zone_info": false, 00:29:23.014 "zone_management": false, 00:29:23.014 "zone_append": false, 00:29:23.014 "compare": false, 00:29:23.014 "compare_and_write": false, 00:29:23.014 "abort": true, 00:29:23.014 "seek_hole": false, 00:29:23.014 "seek_data": false, 00:29:23.014 "copy": true, 00:29:23.014 "nvme_iov_md": false 00:29:23.014 }, 00:29:23.014 "memory_domains": [ 00:29:23.014 { 00:29:23.014 "dma_device_id": "system", 00:29:23.014 "dma_device_type": 1 00:29:23.014 }, 00:29:23.014 { 00:29:23.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:23.014 "dma_device_type": 2 00:29:23.014 } 00:29:23.014 ], 00:29:23.014 "driver_specific": { 00:29:23.014 "passthru": { 00:29:23.014 "name": "Passthru0", 00:29:23.014 "base_bdev_name": "Malloc0" 00:29:23.014 } 00:29:23.014 } 00:29:23.014 } 00:29:23.014 ]' 00:29:23.014 17:26:23 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:29:23.275 17:26:23 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:29:23.275 17:26:23 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:29:23.275 17:26:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.275 17:26:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:29:23.275 17:26:23 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.275 17:26:23 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:23.275 17:26:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.275 17:26:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:29:23.275 17:26:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.275 17:26:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:29:23.275 17:26:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.275 17:26:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:29:23.275 17:26:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.275 17:26:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:29:23.275 17:26:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:29:23.275 17:26:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:29:23.275 00:29:23.275 real 0m0.345s 00:29:23.275 user 0m0.183s 00:29:23.275 sys 0m0.044s 00:29:23.275 17:26:23 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:23.275 17:26:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:29:23.275 ************************************ 00:29:23.275 END TEST rpc_integrity 00:29:23.275 ************************************ 00:29:23.275 17:26:23 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:29:23.275 17:26:23 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:23.275 17:26:23 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:23.275 17:26:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:29:23.275 ************************************ 00:29:23.275 START TEST rpc_plugins 00:29:23.275 ************************************ 00:29:23.275 17:26:23 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:29:23.275 17:26:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:29:23.275 17:26:23 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.275 17:26:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:29:23.275 17:26:23 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.275 17:26:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:29:23.275 17:26:23 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:29:23.275 17:26:23 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.275 17:26:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:29:23.275 17:26:23 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.275 17:26:23 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:29:23.275 { 00:29:23.275 "name": "Malloc1", 00:29:23.275 "aliases": [ 00:29:23.275 "dc1fe88d-ed74-448c-bc04-a5190fb4a33f" 00:29:23.275 ], 00:29:23.275 "product_name": "Malloc disk", 00:29:23.275 "block_size": 4096, 00:29:23.275 "num_blocks": 256, 00:29:23.275 "uuid": "dc1fe88d-ed74-448c-bc04-a5190fb4a33f", 00:29:23.275 "assigned_rate_limits": { 00:29:23.275 "rw_ios_per_sec": 0, 00:29:23.275 "rw_mbytes_per_sec": 0, 00:29:23.275 "r_mbytes_per_sec": 0, 00:29:23.275 "w_mbytes_per_sec": 0 00:29:23.275 }, 00:29:23.275 "claimed": false, 00:29:23.275 "zoned": false, 00:29:23.275 "supported_io_types": { 00:29:23.275 "read": true, 00:29:23.275 "write": true, 00:29:23.275 "unmap": true, 00:29:23.275 "flush": true, 00:29:23.275 "reset": true, 00:29:23.275 "nvme_admin": false, 00:29:23.275 "nvme_io": false, 00:29:23.275 "nvme_io_md": false, 00:29:23.275 "write_zeroes": true, 00:29:23.275 "zcopy": true, 00:29:23.275 "get_zone_info": false, 00:29:23.275 "zone_management": false, 00:29:23.275 "zone_append": false, 00:29:23.275 "compare": false, 00:29:23.275 "compare_and_write": false, 00:29:23.275 "abort": true, 00:29:23.275 "seek_hole": false, 00:29:23.275 "seek_data": false, 00:29:23.275 "copy": 
true, 00:29:23.275 "nvme_iov_md": false 00:29:23.275 }, 00:29:23.275 "memory_domains": [ 00:29:23.275 { 00:29:23.275 "dma_device_id": "system", 00:29:23.275 "dma_device_type": 1 00:29:23.275 }, 00:29:23.275 { 00:29:23.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:23.275 "dma_device_type": 2 00:29:23.275 } 00:29:23.275 ], 00:29:23.275 "driver_specific": {} 00:29:23.275 } 00:29:23.275 ]' 00:29:23.275 17:26:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:29:23.535 17:26:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:29:23.535 17:26:23 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:29:23.535 17:26:23 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.535 17:26:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:29:23.535 17:26:23 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.535 17:26:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:29:23.535 17:26:24 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.535 17:26:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:29:23.535 17:26:24 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.535 17:26:24 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:29:23.535 17:26:24 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:29:23.535 17:26:24 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:29:23.535 00:29:23.535 real 0m0.174s 00:29:23.535 user 0m0.099s 00:29:23.535 sys 0m0.032s 00:29:23.535 17:26:24 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:23.535 17:26:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:29:23.535 ************************************ 00:29:23.536 END TEST rpc_plugins 00:29:23.536 ************************************ 00:29:23.536 17:26:24 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:29:23.536 17:26:24 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:23.536 17:26:24 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:23.536 17:26:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:29:23.536 ************************************ 00:29:23.536 START TEST rpc_trace_cmd_test 00:29:23.536 ************************************ 00:29:23.536 17:26:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:29:23.536 17:26:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:29:23.536 17:26:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:29:23.536 17:26:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.536 17:26:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:29:23.536 17:26:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.536 17:26:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:29:23.536 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56972", 00:29:23.536 "tpoint_group_mask": "0x8", 00:29:23.536 "iscsi_conn": { 00:29:23.536 "mask": "0x2", 00:29:23.536 "tpoint_mask": "0x0" 00:29:23.536 }, 00:29:23.536 "scsi": { 00:29:23.536 "mask": "0x4", 00:29:23.536 "tpoint_mask": "0x0" 00:29:23.536 }, 00:29:23.536 "bdev": { 00:29:23.536 "mask": "0x8", 00:29:23.536 "tpoint_mask": "0xffffffffffffffff" 00:29:23.536 }, 00:29:23.536 "nvmf_rdma": { 00:29:23.536 "mask": "0x10", 00:29:23.536 "tpoint_mask": "0x0" 00:29:23.536 }, 00:29:23.536 "nvmf_tcp": { 00:29:23.536 "mask": "0x20", 00:29:23.536 "tpoint_mask": "0x0" 00:29:23.536 }, 00:29:23.536 "ftl": { 00:29:23.536 "mask": "0x40", 00:29:23.536 "tpoint_mask": "0x0" 00:29:23.536 }, 00:29:23.536 "blobfs": { 00:29:23.536 "mask": "0x80", 00:29:23.536 "tpoint_mask": "0x0" 00:29:23.536 }, 00:29:23.536 "dsa": { 00:29:23.536 "mask": "0x200", 00:29:23.536 "tpoint_mask": "0x0" 00:29:23.536 }, 00:29:23.536 "thread": { 00:29:23.536 "mask": "0x400", 00:29:23.536 
"tpoint_mask": "0x0" 00:29:23.536 }, 00:29:23.536 "nvme_pcie": { 00:29:23.536 "mask": "0x800", 00:29:23.536 "tpoint_mask": "0x0" 00:29:23.536 }, 00:29:23.536 "iaa": { 00:29:23.536 "mask": "0x1000", 00:29:23.536 "tpoint_mask": "0x0" 00:29:23.536 }, 00:29:23.536 "nvme_tcp": { 00:29:23.536 "mask": "0x2000", 00:29:23.536 "tpoint_mask": "0x0" 00:29:23.536 }, 00:29:23.536 "bdev_nvme": { 00:29:23.536 "mask": "0x4000", 00:29:23.536 "tpoint_mask": "0x0" 00:29:23.536 }, 00:29:23.536 "sock": { 00:29:23.536 "mask": "0x8000", 00:29:23.536 "tpoint_mask": "0x0" 00:29:23.536 }, 00:29:23.536 "blob": { 00:29:23.536 "mask": "0x10000", 00:29:23.536 "tpoint_mask": "0x0" 00:29:23.536 }, 00:29:23.536 "bdev_raid": { 00:29:23.536 "mask": "0x20000", 00:29:23.536 "tpoint_mask": "0x0" 00:29:23.536 }, 00:29:23.536 "scheduler": { 00:29:23.536 "mask": "0x40000", 00:29:23.536 "tpoint_mask": "0x0" 00:29:23.536 } 00:29:23.536 }' 00:29:23.536 17:26:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:29:23.536 17:26:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:29:23.536 17:26:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:29:23.796 17:26:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:29:23.796 17:26:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:29:23.796 17:26:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:29:23.796 17:26:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:29:23.796 17:26:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:29:23.796 17:26:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:29:23.796 17:26:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:29:23.796 00:29:23.796 real 0m0.246s 00:29:23.796 user 0m0.199s 00:29:23.796 sys 0m0.038s 00:29:23.796 17:26:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:29:23.796 17:26:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:29:23.796 ************************************ 00:29:23.796 END TEST rpc_trace_cmd_test 00:29:23.796 ************************************ 00:29:23.796 17:26:24 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:29:23.796 17:26:24 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:29:23.796 17:26:24 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:29:23.796 17:26:24 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:23.796 17:26:24 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:23.796 17:26:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:29:23.796 ************************************ 00:29:23.796 START TEST rpc_daemon_integrity 00:29:23.796 ************************************ 00:29:23.796 17:26:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:29:23.796 17:26:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:29:23.796 17:26:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.796 17:26:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:29:23.796 17:26:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.796 17:26:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:29:23.796 17:26:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:29:24.056 17:26:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:29:24.056 17:26:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:29:24.056 17:26:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.056 17:26:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:29:24.056 17:26:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.056 17:26:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:29:24.056 17:26:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:29:24.056 17:26:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.056 17:26:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:29:24.056 17:26:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.056 17:26:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:29:24.056 { 00:29:24.056 "name": "Malloc2", 00:29:24.056 "aliases": [ 00:29:24.056 "f5b6df06-86d3-4d81-a76c-22ff47e8d50b" 00:29:24.056 ], 00:29:24.056 "product_name": "Malloc disk", 00:29:24.056 "block_size": 512, 00:29:24.056 "num_blocks": 16384, 00:29:24.056 "uuid": "f5b6df06-86d3-4d81-a76c-22ff47e8d50b", 00:29:24.056 "assigned_rate_limits": { 00:29:24.056 "rw_ios_per_sec": 0, 00:29:24.056 "rw_mbytes_per_sec": 0, 00:29:24.056 "r_mbytes_per_sec": 0, 00:29:24.056 "w_mbytes_per_sec": 0 00:29:24.056 }, 00:29:24.056 "claimed": false, 00:29:24.056 "zoned": false, 00:29:24.056 "supported_io_types": { 00:29:24.056 "read": true, 00:29:24.056 "write": true, 00:29:24.056 "unmap": true, 00:29:24.056 "flush": true, 00:29:24.056 "reset": true, 00:29:24.056 "nvme_admin": false, 00:29:24.056 "nvme_io": false, 00:29:24.056 "nvme_io_md": false, 00:29:24.056 "write_zeroes": true, 00:29:24.056 "zcopy": true, 00:29:24.056 "get_zone_info": false, 00:29:24.056 "zone_management": false, 00:29:24.056 "zone_append": false, 00:29:24.056 "compare": false, 00:29:24.056 "compare_and_write": false, 00:29:24.056 "abort": true, 00:29:24.056 "seek_hole": false, 00:29:24.056 "seek_data": false, 00:29:24.056 "copy": true, 00:29:24.056 "nvme_iov_md": false 00:29:24.056 }, 00:29:24.056 "memory_domains": [ 00:29:24.056 { 00:29:24.056 "dma_device_id": "system", 00:29:24.056 "dma_device_type": 1 00:29:24.056 }, 00:29:24.056 { 00:29:24.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:24.056 "dma_device_type": 2 00:29:24.056 } 
00:29:24.056 ], 00:29:24.056 "driver_specific": {} 00:29:24.056 } 00:29:24.056 ]' 00:29:24.056 17:26:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:29:24.056 17:26:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:29:24.056 17:26:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:29:24.056 17:26:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.056 17:26:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:29:24.056 [2024-11-26 17:26:24.585885] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:29:24.056 [2024-11-26 17:26:24.585976] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:24.056 [2024-11-26 17:26:24.585997] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:29:24.056 [2024-11-26 17:26:24.586011] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:24.056 [2024-11-26 17:26:24.588452] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:24.056 [2024-11-26 17:26:24.588499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:29:24.056 Passthru0 00:29:24.056 17:26:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.056 17:26:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:29:24.056 17:26:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.056 17:26:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:29:24.056 17:26:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.056 17:26:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:29:24.056 { 00:29:24.056 "name": "Malloc2", 00:29:24.056 "aliases": [ 00:29:24.056 "f5b6df06-86d3-4d81-a76c-22ff47e8d50b" 
00:29:24.056 ], 00:29:24.056 "product_name": "Malloc disk", 00:29:24.056 "block_size": 512, 00:29:24.056 "num_blocks": 16384, 00:29:24.056 "uuid": "f5b6df06-86d3-4d81-a76c-22ff47e8d50b", 00:29:24.056 "assigned_rate_limits": { 00:29:24.056 "rw_ios_per_sec": 0, 00:29:24.056 "rw_mbytes_per_sec": 0, 00:29:24.056 "r_mbytes_per_sec": 0, 00:29:24.056 "w_mbytes_per_sec": 0 00:29:24.056 }, 00:29:24.056 "claimed": true, 00:29:24.056 "claim_type": "exclusive_write", 00:29:24.056 "zoned": false, 00:29:24.056 "supported_io_types": { 00:29:24.056 "read": true, 00:29:24.056 "write": true, 00:29:24.056 "unmap": true, 00:29:24.056 "flush": true, 00:29:24.056 "reset": true, 00:29:24.056 "nvme_admin": false, 00:29:24.056 "nvme_io": false, 00:29:24.056 "nvme_io_md": false, 00:29:24.056 "write_zeroes": true, 00:29:24.056 "zcopy": true, 00:29:24.056 "get_zone_info": false, 00:29:24.056 "zone_management": false, 00:29:24.056 "zone_append": false, 00:29:24.056 "compare": false, 00:29:24.056 "compare_and_write": false, 00:29:24.056 "abort": true, 00:29:24.056 "seek_hole": false, 00:29:24.056 "seek_data": false, 00:29:24.056 "copy": true, 00:29:24.056 "nvme_iov_md": false 00:29:24.056 }, 00:29:24.057 "memory_domains": [ 00:29:24.057 { 00:29:24.057 "dma_device_id": "system", 00:29:24.057 "dma_device_type": 1 00:29:24.057 }, 00:29:24.057 { 00:29:24.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:24.057 "dma_device_type": 2 00:29:24.057 } 00:29:24.057 ], 00:29:24.057 "driver_specific": {} 00:29:24.057 }, 00:29:24.057 { 00:29:24.057 "name": "Passthru0", 00:29:24.057 "aliases": [ 00:29:24.057 "48442123-b207-58f1-a641-b4341bfbe4a5" 00:29:24.057 ], 00:29:24.057 "product_name": "passthru", 00:29:24.057 "block_size": 512, 00:29:24.057 "num_blocks": 16384, 00:29:24.057 "uuid": "48442123-b207-58f1-a641-b4341bfbe4a5", 00:29:24.057 "assigned_rate_limits": { 00:29:24.057 "rw_ios_per_sec": 0, 00:29:24.057 "rw_mbytes_per_sec": 0, 00:29:24.057 "r_mbytes_per_sec": 0, 00:29:24.057 "w_mbytes_per_sec": 0 
00:29:24.057 }, 00:29:24.057 "claimed": false, 00:29:24.057 "zoned": false, 00:29:24.057 "supported_io_types": { 00:29:24.057 "read": true, 00:29:24.057 "write": true, 00:29:24.057 "unmap": true, 00:29:24.057 "flush": true, 00:29:24.057 "reset": true, 00:29:24.057 "nvme_admin": false, 00:29:24.057 "nvme_io": false, 00:29:24.057 "nvme_io_md": false, 00:29:24.057 "write_zeroes": true, 00:29:24.057 "zcopy": true, 00:29:24.057 "get_zone_info": false, 00:29:24.057 "zone_management": false, 00:29:24.057 "zone_append": false, 00:29:24.057 "compare": false, 00:29:24.057 "compare_and_write": false, 00:29:24.057 "abort": true, 00:29:24.057 "seek_hole": false, 00:29:24.057 "seek_data": false, 00:29:24.057 "copy": true, 00:29:24.057 "nvme_iov_md": false 00:29:24.057 }, 00:29:24.057 "memory_domains": [ 00:29:24.057 { 00:29:24.057 "dma_device_id": "system", 00:29:24.057 "dma_device_type": 1 00:29:24.057 }, 00:29:24.057 { 00:29:24.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:24.057 "dma_device_type": 2 00:29:24.057 } 00:29:24.057 ], 00:29:24.057 "driver_specific": { 00:29:24.057 "passthru": { 00:29:24.057 "name": "Passthru0", 00:29:24.057 "base_bdev_name": "Malloc2" 00:29:24.057 } 00:29:24.057 } 00:29:24.057 } 00:29:24.057 ]' 00:29:24.057 17:26:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:29:24.057 17:26:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:29:24.057 17:26:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:29:24.057 17:26:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.057 17:26:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:29:24.057 17:26:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.057 17:26:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:29:24.057 17:26:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:29:24.057 17:26:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:29:24.057 17:26:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.057 17:26:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:29:24.057 17:26:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.057 17:26:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:29:24.057 17:26:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.057 17:26:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:29:24.057 17:26:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:29:24.316 17:26:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:29:24.316 00:29:24.316 real 0m0.313s 00:29:24.316 user 0m0.164s 00:29:24.316 sys 0m0.046s 00:29:24.316 17:26:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:24.316 17:26:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:29:24.317 ************************************ 00:29:24.317 END TEST rpc_daemon_integrity 00:29:24.317 ************************************ 00:29:24.317 17:26:24 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:24.317 17:26:24 rpc -- rpc/rpc.sh@84 -- # killprocess 56972 00:29:24.317 17:26:24 rpc -- common/autotest_common.sh@954 -- # '[' -z 56972 ']' 00:29:24.317 17:26:24 rpc -- common/autotest_common.sh@958 -- # kill -0 56972 00:29:24.317 17:26:24 rpc -- common/autotest_common.sh@959 -- # uname 00:29:24.317 17:26:24 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:24.317 17:26:24 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56972 00:29:24.317 17:26:24 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:24.317 17:26:24 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:24.317 
17:26:24 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56972' 00:29:24.317 killing process with pid 56972 00:29:24.317 17:26:24 rpc -- common/autotest_common.sh@973 -- # kill 56972 00:29:24.317 17:26:24 rpc -- common/autotest_common.sh@978 -- # wait 56972 00:29:27.606 ************************************ 00:29:27.606 END TEST rpc 00:29:27.606 ************************************ 00:29:27.606 00:29:27.606 real 0m5.693s 00:29:27.606 user 0m6.210s 00:29:27.606 sys 0m0.924s 00:29:27.606 17:26:27 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:27.606 17:26:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:29:27.606 17:26:27 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:29:27.606 17:26:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:27.606 17:26:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:27.606 17:26:27 -- common/autotest_common.sh@10 -- # set +x 00:29:27.606 ************************************ 00:29:27.606 START TEST skip_rpc 00:29:27.606 ************************************ 00:29:27.606 17:26:27 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:29:27.606 * Looking for test storage... 
00:29:27.606 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:29:27.606 17:26:27 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:27.606 17:26:27 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:29:27.606 17:26:27 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:27.606 17:26:27 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:27.606 17:26:27 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:27.606 17:26:27 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:27.606 17:26:27 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:27.606 17:26:27 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:29:27.606 17:26:27 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:29:27.606 17:26:27 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:29:27.606 17:26:27 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:29:27.606 17:26:27 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:29:27.606 17:26:27 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:29:27.606 17:26:27 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:29:27.606 17:26:27 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:27.606 17:26:27 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:29:27.606 17:26:27 skip_rpc -- scripts/common.sh@345 -- # : 1 00:29:27.606 17:26:27 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:27.606 17:26:27 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:27.606 17:26:27 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:29:27.606 17:26:27 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:29:27.606 17:26:27 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:27.606 17:26:27 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:29:27.606 17:26:27 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:29:27.606 17:26:27 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:29:27.607 17:26:27 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:29:27.607 17:26:27 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:27.607 17:26:27 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:29:27.607 17:26:27 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:29:27.607 17:26:27 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:27.607 17:26:27 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:27.607 17:26:27 skip_rpc -- scripts/common.sh@368 -- # return 0 00:29:27.607 17:26:27 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:27.607 17:26:27 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:27.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.607 --rc genhtml_branch_coverage=1 00:29:27.607 --rc genhtml_function_coverage=1 00:29:27.607 --rc genhtml_legend=1 00:29:27.607 --rc geninfo_all_blocks=1 00:29:27.607 --rc geninfo_unexecuted_blocks=1 00:29:27.607 00:29:27.607 ' 00:29:27.607 17:26:27 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:27.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.607 --rc genhtml_branch_coverage=1 00:29:27.607 --rc genhtml_function_coverage=1 00:29:27.607 --rc genhtml_legend=1 00:29:27.607 --rc geninfo_all_blocks=1 00:29:27.607 --rc geninfo_unexecuted_blocks=1 00:29:27.607 00:29:27.607 ' 00:29:27.607 17:26:27 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:29:27.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.607 --rc genhtml_branch_coverage=1 00:29:27.607 --rc genhtml_function_coverage=1 00:29:27.607 --rc genhtml_legend=1 00:29:27.607 --rc geninfo_all_blocks=1 00:29:27.607 --rc geninfo_unexecuted_blocks=1 00:29:27.607 00:29:27.607 ' 00:29:27.607 17:26:27 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:27.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.607 --rc genhtml_branch_coverage=1 00:29:27.607 --rc genhtml_function_coverage=1 00:29:27.607 --rc genhtml_legend=1 00:29:27.607 --rc geninfo_all_blocks=1 00:29:27.607 --rc geninfo_unexecuted_blocks=1 00:29:27.607 00:29:27.607 ' 00:29:27.607 17:26:27 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:29:27.607 17:26:27 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:29:27.607 17:26:27 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:29:27.607 17:26:27 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:27.607 17:26:27 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:27.607 17:26:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:27.607 ************************************ 00:29:27.607 START TEST skip_rpc 00:29:27.607 ************************************ 00:29:27.607 17:26:27 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:29:27.607 17:26:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:29:27.607 17:26:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57201 00:29:27.607 17:26:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:29:27.607 17:26:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:29:27.607 [2024-11-26 17:26:27.960995] Starting SPDK v25.01-pre 
git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:29:27.607 [2024-11-26 17:26:27.961185] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57201 ] 00:29:27.607 [2024-11-26 17:26:28.136102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:27.607 [2024-11-26 17:26:28.277007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:32.922 17:26:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:29:32.922 17:26:32 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:29:32.922 17:26:32 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:29:32.922 17:26:32 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:32.922 17:26:32 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:32.922 17:26:32 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:32.922 17:26:32 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:32.922 17:26:32 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:29:32.922 17:26:32 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.922 17:26:32 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:32.922 17:26:32 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:32.922 17:26:32 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:29:32.922 17:26:32 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:32.922 17:26:32 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:32.922 17:26:32 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:29:32.922 17:26:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:29:32.922 17:26:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57201 00:29:32.922 17:26:32 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57201 ']' 00:29:32.922 17:26:32 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57201 00:29:32.922 17:26:32 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:29:32.922 17:26:32 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:32.922 17:26:32 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57201 00:29:32.922 17:26:32 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:32.922 17:26:32 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:32.922 killing process with pid 57201 00:29:32.922 17:26:32 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57201' 00:29:32.922 17:26:32 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57201 00:29:32.922 17:26:32 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57201 00:29:35.450 00:29:35.450 real 0m7.939s 00:29:35.450 user 0m7.416s 00:29:35.450 sys 0m0.425s 00:29:35.450 17:26:35 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:35.450 ************************************ 00:29:35.450 END TEST skip_rpc 00:29:35.450 ************************************ 00:29:35.450 17:26:35 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:35.450 17:26:35 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:29:35.450 17:26:35 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:35.450 17:26:35 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:35.450 17:26:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:35.450 
************************************ 00:29:35.450 START TEST skip_rpc_with_json 00:29:35.450 ************************************ 00:29:35.450 17:26:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:29:35.450 17:26:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:29:35.450 17:26:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57316 00:29:35.450 17:26:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:29:35.450 17:26:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:29:35.450 17:26:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57316 00:29:35.450 17:26:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57316 ']' 00:29:35.450 17:26:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:35.450 17:26:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:35.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:35.450 17:26:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:35.450 17:26:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:35.450 17:26:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:29:35.450 [2024-11-26 17:26:35.931607] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:29:35.450 [2024-11-26 17:26:35.931752] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57316 ] 00:29:35.450 [2024-11-26 17:26:36.101256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:35.708 [2024-11-26 17:26:36.238822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:36.642 17:26:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:36.642 17:26:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:29:36.642 17:26:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:29:36.642 17:26:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.642 17:26:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:29:36.642 [2024-11-26 17:26:37.208052] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:29:36.642 request: 00:29:36.642 { 00:29:36.642 "trtype": "tcp", 00:29:36.642 "method": "nvmf_get_transports", 00:29:36.642 "req_id": 1 00:29:36.642 } 00:29:36.642 Got JSON-RPC error response 00:29:36.642 response: 00:29:36.642 { 00:29:36.642 "code": -19, 00:29:36.642 "message": "No such device" 00:29:36.642 } 00:29:36.642 17:26:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:36.642 17:26:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:29:36.642 17:26:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.642 17:26:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:29:36.642 [2024-11-26 17:26:37.216184] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:29:36.642 17:26:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.642 17:26:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:29:36.642 17:26:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.642 17:26:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:29:36.900 17:26:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.900 17:26:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:29:36.901 { 00:29:36.901 "subsystems": [ 00:29:36.901 { 00:29:36.901 "subsystem": "fsdev", 00:29:36.901 "config": [ 00:29:36.901 { 00:29:36.901 "method": "fsdev_set_opts", 00:29:36.901 "params": { 00:29:36.901 "fsdev_io_pool_size": 65535, 00:29:36.901 "fsdev_io_cache_size": 256 00:29:36.901 } 00:29:36.901 } 00:29:36.901 ] 00:29:36.901 }, 00:29:36.901 { 00:29:36.901 "subsystem": "keyring", 00:29:36.901 "config": [] 00:29:36.901 }, 00:29:36.901 { 00:29:36.901 "subsystem": "iobuf", 00:29:36.901 "config": [ 00:29:36.901 { 00:29:36.901 "method": "iobuf_set_options", 00:29:36.901 "params": { 00:29:36.901 "small_pool_count": 8192, 00:29:36.901 "large_pool_count": 1024, 00:29:36.901 "small_bufsize": 8192, 00:29:36.901 "large_bufsize": 135168, 00:29:36.901 "enable_numa": false 00:29:36.901 } 00:29:36.901 } 00:29:36.901 ] 00:29:36.901 }, 00:29:36.901 { 00:29:36.901 "subsystem": "sock", 00:29:36.901 "config": [ 00:29:36.901 { 00:29:36.901 "method": "sock_set_default_impl", 00:29:36.901 "params": { 00:29:36.901 "impl_name": "posix" 00:29:36.901 } 00:29:36.901 }, 00:29:36.901 { 00:29:36.901 "method": "sock_impl_set_options", 00:29:36.901 "params": { 00:29:36.901 "impl_name": "ssl", 00:29:36.901 "recv_buf_size": 4096, 00:29:36.901 "send_buf_size": 4096, 00:29:36.901 "enable_recv_pipe": true, 00:29:36.901 "enable_quickack": false, 00:29:36.901 
"enable_placement_id": 0, 00:29:36.901 "enable_zerocopy_send_server": true, 00:29:36.901 "enable_zerocopy_send_client": false, 00:29:36.901 "zerocopy_threshold": 0, 00:29:36.901 "tls_version": 0, 00:29:36.901 "enable_ktls": false 00:29:36.901 } 00:29:36.901 }, 00:29:36.901 { 00:29:36.901 "method": "sock_impl_set_options", 00:29:36.901 "params": { 00:29:36.901 "impl_name": "posix", 00:29:36.901 "recv_buf_size": 2097152, 00:29:36.901 "send_buf_size": 2097152, 00:29:36.901 "enable_recv_pipe": true, 00:29:36.901 "enable_quickack": false, 00:29:36.901 "enable_placement_id": 0, 00:29:36.901 "enable_zerocopy_send_server": true, 00:29:36.901 "enable_zerocopy_send_client": false, 00:29:36.901 "zerocopy_threshold": 0, 00:29:36.901 "tls_version": 0, 00:29:36.901 "enable_ktls": false 00:29:36.901 } 00:29:36.901 } 00:29:36.901 ] 00:29:36.901 }, 00:29:36.901 { 00:29:36.901 "subsystem": "vmd", 00:29:36.901 "config": [] 00:29:36.901 }, 00:29:36.901 { 00:29:36.901 "subsystem": "accel", 00:29:36.901 "config": [ 00:29:36.901 { 00:29:36.901 "method": "accel_set_options", 00:29:36.901 "params": { 00:29:36.901 "small_cache_size": 128, 00:29:36.901 "large_cache_size": 16, 00:29:36.901 "task_count": 2048, 00:29:36.901 "sequence_count": 2048, 00:29:36.901 "buf_count": 2048 00:29:36.901 } 00:29:36.901 } 00:29:36.901 ] 00:29:36.901 }, 00:29:36.901 { 00:29:36.901 "subsystem": "bdev", 00:29:36.901 "config": [ 00:29:36.901 { 00:29:36.901 "method": "bdev_set_options", 00:29:36.901 "params": { 00:29:36.901 "bdev_io_pool_size": 65535, 00:29:36.901 "bdev_io_cache_size": 256, 00:29:36.901 "bdev_auto_examine": true, 00:29:36.901 "iobuf_small_cache_size": 128, 00:29:36.901 "iobuf_large_cache_size": 16 00:29:36.901 } 00:29:36.901 }, 00:29:36.901 { 00:29:36.901 "method": "bdev_raid_set_options", 00:29:36.901 "params": { 00:29:36.901 "process_window_size_kb": 1024, 00:29:36.901 "process_max_bandwidth_mb_sec": 0 00:29:36.901 } 00:29:36.901 }, 00:29:36.901 { 00:29:36.901 "method": "bdev_iscsi_set_options", 
00:29:36.901 "params": { 00:29:36.901 "timeout_sec": 30 00:29:36.901 } 00:29:36.901 }, 00:29:36.901 { 00:29:36.901 "method": "bdev_nvme_set_options", 00:29:36.901 "params": { 00:29:36.901 "action_on_timeout": "none", 00:29:36.901 "timeout_us": 0, 00:29:36.901 "timeout_admin_us": 0, 00:29:36.901 "keep_alive_timeout_ms": 10000, 00:29:36.901 "arbitration_burst": 0, 00:29:36.901 "low_priority_weight": 0, 00:29:36.901 "medium_priority_weight": 0, 00:29:36.901 "high_priority_weight": 0, 00:29:36.901 "nvme_adminq_poll_period_us": 10000, 00:29:36.901 "nvme_ioq_poll_period_us": 0, 00:29:36.901 "io_queue_requests": 0, 00:29:36.901 "delay_cmd_submit": true, 00:29:36.901 "transport_retry_count": 4, 00:29:36.901 "bdev_retry_count": 3, 00:29:36.901 "transport_ack_timeout": 0, 00:29:36.901 "ctrlr_loss_timeout_sec": 0, 00:29:36.901 "reconnect_delay_sec": 0, 00:29:36.901 "fast_io_fail_timeout_sec": 0, 00:29:36.901 "disable_auto_failback": false, 00:29:36.901 "generate_uuids": false, 00:29:36.901 "transport_tos": 0, 00:29:36.901 "nvme_error_stat": false, 00:29:36.901 "rdma_srq_size": 0, 00:29:36.901 "io_path_stat": false, 00:29:36.901 "allow_accel_sequence": false, 00:29:36.901 "rdma_max_cq_size": 0, 00:29:36.901 "rdma_cm_event_timeout_ms": 0, 00:29:36.901 "dhchap_digests": [ 00:29:36.901 "sha256", 00:29:36.901 "sha384", 00:29:36.901 "sha512" 00:29:36.901 ], 00:29:36.901 "dhchap_dhgroups": [ 00:29:36.901 "null", 00:29:36.901 "ffdhe2048", 00:29:36.901 "ffdhe3072", 00:29:36.901 "ffdhe4096", 00:29:36.901 "ffdhe6144", 00:29:36.901 "ffdhe8192" 00:29:36.901 ] 00:29:36.901 } 00:29:36.901 }, 00:29:36.901 { 00:29:36.901 "method": "bdev_nvme_set_hotplug", 00:29:36.901 "params": { 00:29:36.901 "period_us": 100000, 00:29:36.901 "enable": false 00:29:36.901 } 00:29:36.901 }, 00:29:36.901 { 00:29:36.901 "method": "bdev_wait_for_examine" 00:29:36.901 } 00:29:36.901 ] 00:29:36.901 }, 00:29:36.901 { 00:29:36.901 "subsystem": "scsi", 00:29:36.901 "config": null 00:29:36.901 }, 00:29:36.901 { 
00:29:36.901 "subsystem": "scheduler", 00:29:36.901 "config": [ 00:29:36.901 { 00:29:36.901 "method": "framework_set_scheduler", 00:29:36.901 "params": { 00:29:36.901 "name": "static" 00:29:36.901 } 00:29:36.901 } 00:29:36.901 ] 00:29:36.901 }, 00:29:36.901 { 00:29:36.901 "subsystem": "vhost_scsi", 00:29:36.902 "config": [] 00:29:36.902 }, 00:29:36.902 { 00:29:36.902 "subsystem": "vhost_blk", 00:29:36.902 "config": [] 00:29:36.902 }, 00:29:36.902 { 00:29:36.902 "subsystem": "ublk", 00:29:36.902 "config": [] 00:29:36.902 }, 00:29:36.902 { 00:29:36.902 "subsystem": "nbd", 00:29:36.902 "config": [] 00:29:36.902 }, 00:29:36.902 { 00:29:36.902 "subsystem": "nvmf", 00:29:36.902 "config": [ 00:29:36.902 { 00:29:36.902 "method": "nvmf_set_config", 00:29:36.902 "params": { 00:29:36.902 "discovery_filter": "match_any", 00:29:36.902 "admin_cmd_passthru": { 00:29:36.902 "identify_ctrlr": false 00:29:36.902 }, 00:29:36.902 "dhchap_digests": [ 00:29:36.902 "sha256", 00:29:36.902 "sha384", 00:29:36.902 "sha512" 00:29:36.902 ], 00:29:36.902 "dhchap_dhgroups": [ 00:29:36.902 "null", 00:29:36.902 "ffdhe2048", 00:29:36.902 "ffdhe3072", 00:29:36.902 "ffdhe4096", 00:29:36.902 "ffdhe6144", 00:29:36.902 "ffdhe8192" 00:29:36.902 ] 00:29:36.902 } 00:29:36.902 }, 00:29:36.902 { 00:29:36.902 "method": "nvmf_set_max_subsystems", 00:29:36.902 "params": { 00:29:36.902 "max_subsystems": 1024 00:29:36.902 } 00:29:36.902 }, 00:29:36.902 { 00:29:36.902 "method": "nvmf_set_crdt", 00:29:36.902 "params": { 00:29:36.902 "crdt1": 0, 00:29:36.902 "crdt2": 0, 00:29:36.902 "crdt3": 0 00:29:36.902 } 00:29:36.902 }, 00:29:36.902 { 00:29:36.902 "method": "nvmf_create_transport", 00:29:36.902 "params": { 00:29:36.902 "trtype": "TCP", 00:29:36.902 "max_queue_depth": 128, 00:29:36.902 "max_io_qpairs_per_ctrlr": 127, 00:29:36.902 "in_capsule_data_size": 4096, 00:29:36.902 "max_io_size": 131072, 00:29:36.902 "io_unit_size": 131072, 00:29:36.902 "max_aq_depth": 128, 00:29:36.902 "num_shared_buffers": 511, 
00:29:36.902 "buf_cache_size": 4294967295, 00:29:36.902 "dif_insert_or_strip": false, 00:29:36.902 "zcopy": false, 00:29:36.902 "c2h_success": true, 00:29:36.902 "sock_priority": 0, 00:29:36.902 "abort_timeout_sec": 1, 00:29:36.902 "ack_timeout": 0, 00:29:36.902 "data_wr_pool_size": 0 00:29:36.902 } 00:29:36.902 } 00:29:36.902 ] 00:29:36.902 }, 00:29:36.902 { 00:29:36.902 "subsystem": "iscsi", 00:29:36.902 "config": [ 00:29:36.902 { 00:29:36.902 "method": "iscsi_set_options", 00:29:36.902 "params": { 00:29:36.902 "node_base": "iqn.2016-06.io.spdk", 00:29:36.902 "max_sessions": 128, 00:29:36.902 "max_connections_per_session": 2, 00:29:36.902 "max_queue_depth": 64, 00:29:36.902 "default_time2wait": 2, 00:29:36.902 "default_time2retain": 20, 00:29:36.902 "first_burst_length": 8192, 00:29:36.902 "immediate_data": true, 00:29:36.902 "allow_duplicated_isid": false, 00:29:36.902 "error_recovery_level": 0, 00:29:36.902 "nop_timeout": 60, 00:29:36.902 "nop_in_interval": 30, 00:29:36.902 "disable_chap": false, 00:29:36.902 "require_chap": false, 00:29:36.902 "mutual_chap": false, 00:29:36.902 "chap_group": 0, 00:29:36.902 "max_large_datain_per_connection": 64, 00:29:36.902 "max_r2t_per_connection": 4, 00:29:36.902 "pdu_pool_size": 36864, 00:29:36.902 "immediate_data_pool_size": 16384, 00:29:36.902 "data_out_pool_size": 2048 00:29:36.902 } 00:29:36.902 } 00:29:36.902 ] 00:29:36.902 } 00:29:36.902 ] 00:29:36.902 } 00:29:36.902 17:26:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:29:36.902 17:26:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57316 00:29:36.902 17:26:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57316 ']' 00:29:36.902 17:26:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57316 00:29:36.902 17:26:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:29:36.902 17:26:37 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:36.902 17:26:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57316 00:29:36.902 17:26:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:36.902 17:26:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:36.902 killing process with pid 57316 00:29:36.902 17:26:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57316' 00:29:36.902 17:26:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57316 00:29:36.902 17:26:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57316 00:29:40.183 17:26:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57372 00:29:40.183 17:26:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:29:40.183 17:26:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:29:45.446 17:26:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57372 00:29:45.446 17:26:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57372 ']' 00:29:45.446 17:26:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57372 00:29:45.446 17:26:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:29:45.446 17:26:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:45.446 17:26:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57372 00:29:45.446 17:26:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:45.446 17:26:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:29:45.446 killing process with pid 57372 00:29:45.446 17:26:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57372' 00:29:45.446 17:26:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57372 00:29:45.446 17:26:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57372 00:29:47.977 17:26:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:29:47.977 17:26:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:29:47.977 00:29:47.977 real 0m12.446s 00:29:47.977 user 0m11.899s 00:29:47.977 sys 0m0.891s 00:29:47.977 17:26:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:47.977 17:26:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:29:47.977 ************************************ 00:29:47.977 END TEST skip_rpc_with_json 00:29:47.977 ************************************ 00:29:47.977 17:26:48 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:29:47.977 17:26:48 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:47.977 17:26:48 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:47.977 17:26:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:47.977 ************************************ 00:29:47.977 START TEST skip_rpc_with_delay 00:29:47.977 ************************************ 00:29:47.977 17:26:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:29:47.977 17:26:48 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:29:47.977 17:26:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:29:47.977 17:26:48 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:29:47.977 17:26:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:47.977 17:26:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:47.977 17:26:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:47.977 17:26:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:47.977 17:26:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:47.977 17:26:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:47.977 17:26:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:47.977 17:26:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:29:47.977 17:26:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:29:47.977 [2024-11-26 17:26:48.438064] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:29:47.977 17:26:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:29:47.977 17:26:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:47.977 17:26:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:47.977 17:26:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:47.977 00:29:47.977 real 0m0.193s 00:29:47.977 user 0m0.111s 00:29:47.977 sys 0m0.081s 00:29:47.977 17:26:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:47.977 17:26:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:29:47.977 ************************************ 00:29:47.977 END TEST skip_rpc_with_delay 00:29:47.977 ************************************ 00:29:47.977 17:26:48 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:29:47.977 17:26:48 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:29:47.977 17:26:48 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:29:47.977 17:26:48 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:47.977 17:26:48 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:47.977 17:26:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:47.977 ************************************ 00:29:47.977 START TEST exit_on_failed_rpc_init 00:29:47.977 ************************************ 00:29:47.977 17:26:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:29:47.977 17:26:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57517 00:29:47.977 17:26:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57517 00:29:47.977 17:26:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57517 ']' 00:29:47.977 17:26:48 skip_rpc.exit_on_failed_rpc_init -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:47.977 17:26:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:47.977 17:26:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:47.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:47.977 17:26:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:29:47.977 17:26:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:47.977 17:26:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:29:48.235 [2024-11-26 17:26:48.688160] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:29:48.235 [2024-11-26 17:26:48.688298] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57517 ] 00:29:48.235 [2024-11-26 17:26:48.870469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:48.494 [2024-11-26 17:26:49.014447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:49.431 17:26:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:49.431 17:26:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:29:49.431 17:26:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:29:49.431 17:26:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:29:49.431 17:26:50 skip_rpc.exit_on_failed_rpc_init 
-- common/autotest_common.sh@652 -- # local es=0 00:29:49.431 17:26:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:29:49.431 17:26:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:49.431 17:26:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:49.431 17:26:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:49.431 17:26:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:49.431 17:26:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:49.431 17:26:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:49.431 17:26:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:49.431 17:26:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:29:49.431 17:26:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:29:49.690 [2024-11-26 17:26:50.159730] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:29:49.690 [2024-11-26 17:26:50.159881] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57540 ] 00:29:49.690 [2024-11-26 17:26:50.326493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:49.949 [2024-11-26 17:26:50.470522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:49.949 [2024-11-26 17:26:50.470630] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:29:49.949 [2024-11-26 17:26:50.470646] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:29:49.949 [2024-11-26 17:26:50.470659] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:50.207 17:26:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:29:50.207 17:26:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:50.207 17:26:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:29:50.207 17:26:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:29:50.207 17:26:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:29:50.207 17:26:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:50.207 17:26:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:29:50.207 17:26:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57517 00:29:50.207 17:26:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57517 ']' 00:29:50.207 17:26:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57517 00:29:50.207 17:26:50 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:29:50.207 17:26:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:50.207 17:26:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57517 00:29:50.207 17:26:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:50.207 17:26:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:50.207 killing process with pid 57517 00:29:50.207 17:26:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57517' 00:29:50.207 17:26:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57517 00:29:50.207 17:26:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57517 00:29:53.492 00:29:53.492 real 0m5.097s 00:29:53.492 user 0m5.510s 00:29:53.492 sys 0m0.626s 00:29:53.492 17:26:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:53.492 17:26:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:29:53.492 ************************************ 00:29:53.492 END TEST exit_on_failed_rpc_init 00:29:53.492 ************************************ 00:29:53.492 17:26:53 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:29:53.492 00:29:53.492 real 0m26.098s 00:29:53.493 user 0m25.121s 00:29:53.493 sys 0m2.262s 00:29:53.493 17:26:53 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:53.493 17:26:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:53.493 ************************************ 00:29:53.493 END TEST skip_rpc 00:29:53.493 ************************************ 00:29:53.493 17:26:53 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:29:53.493 17:26:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:53.493 17:26:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:53.493 17:26:53 -- common/autotest_common.sh@10 -- # set +x 00:29:53.493 ************************************ 00:29:53.493 START TEST rpc_client 00:29:53.493 ************************************ 00:29:53.493 17:26:53 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:29:53.493 * Looking for test storage... 00:29:53.493 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:29:53.493 17:26:53 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:53.493 17:26:53 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:29:53.493 17:26:53 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:53.493 17:26:53 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:53.493 17:26:53 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:53.493 17:26:53 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:53.493 17:26:53 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:53.493 17:26:53 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:29:53.493 17:26:53 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:29:53.493 17:26:53 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:29:53.493 17:26:53 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:29:53.493 17:26:53 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:29:53.493 17:26:53 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:29:53.493 17:26:53 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:29:53.493 17:26:53 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:53.493 17:26:53 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:29:53.493 17:26:53 rpc_client -- scripts/common.sh@345 
-- # : 1 00:29:53.493 17:26:53 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:53.493 17:26:53 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:53.493 17:26:53 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:29:53.493 17:26:53 rpc_client -- scripts/common.sh@353 -- # local d=1 00:29:53.493 17:26:53 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:53.493 17:26:53 rpc_client -- scripts/common.sh@355 -- # echo 1 00:29:53.493 17:26:53 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:29:53.493 17:26:53 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:29:53.493 17:26:53 rpc_client -- scripts/common.sh@353 -- # local d=2 00:29:53.493 17:26:53 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:53.493 17:26:53 rpc_client -- scripts/common.sh@355 -- # echo 2 00:29:53.493 17:26:53 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:29:53.493 17:26:53 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:53.493 17:26:53 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:53.493 17:26:53 rpc_client -- scripts/common.sh@368 -- # return 0 00:29:53.493 17:26:53 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:53.493 17:26:53 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:53.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.493 --rc genhtml_branch_coverage=1 00:29:53.493 --rc genhtml_function_coverage=1 00:29:53.493 --rc genhtml_legend=1 00:29:53.493 --rc geninfo_all_blocks=1 00:29:53.493 --rc geninfo_unexecuted_blocks=1 00:29:53.493 00:29:53.493 ' 00:29:53.493 17:26:53 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:53.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.493 --rc genhtml_branch_coverage=1 00:29:53.493 --rc genhtml_function_coverage=1 00:29:53.493 --rc 
genhtml_legend=1 00:29:53.493 --rc geninfo_all_blocks=1 00:29:53.493 --rc geninfo_unexecuted_blocks=1 00:29:53.493 00:29:53.493 ' 00:29:53.493 17:26:53 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:53.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.493 --rc genhtml_branch_coverage=1 00:29:53.493 --rc genhtml_function_coverage=1 00:29:53.493 --rc genhtml_legend=1 00:29:53.493 --rc geninfo_all_blocks=1 00:29:53.493 --rc geninfo_unexecuted_blocks=1 00:29:53.493 00:29:53.493 ' 00:29:53.493 17:26:53 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:53.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.493 --rc genhtml_branch_coverage=1 00:29:53.493 --rc genhtml_function_coverage=1 00:29:53.493 --rc genhtml_legend=1 00:29:53.493 --rc geninfo_all_blocks=1 00:29:53.493 --rc geninfo_unexecuted_blocks=1 00:29:53.493 00:29:53.493 ' 00:29:53.493 17:26:53 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:29:53.493 OK 00:29:53.493 17:26:54 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:29:53.493 00:29:53.493 real 0m0.279s 00:29:53.493 user 0m0.172s 00:29:53.493 sys 0m0.117s 00:29:53.493 17:26:54 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:53.493 17:26:54 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:29:53.493 ************************************ 00:29:53.493 END TEST rpc_client 00:29:53.493 ************************************ 00:29:53.493 17:26:54 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:29:53.493 17:26:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:53.493 17:26:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:53.493 17:26:54 -- common/autotest_common.sh@10 -- # set +x 00:29:53.493 ************************************ 00:29:53.493 START TEST json_config 
00:29:53.493 ************************************ 00:29:53.493 17:26:54 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:29:53.752 17:26:54 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:53.752 17:26:54 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:29:53.752 17:26:54 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:53.752 17:26:54 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:53.752 17:26:54 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:53.752 17:26:54 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:53.752 17:26:54 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:53.752 17:26:54 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:29:53.752 17:26:54 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:29:53.752 17:26:54 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:29:53.752 17:26:54 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:29:53.752 17:26:54 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:29:53.752 17:26:54 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:29:53.752 17:26:54 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:29:53.752 17:26:54 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:53.752 17:26:54 json_config -- scripts/common.sh@344 -- # case "$op" in 00:29:53.752 17:26:54 json_config -- scripts/common.sh@345 -- # : 1 00:29:53.752 17:26:54 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:53.752 17:26:54 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:53.752 17:26:54 json_config -- scripts/common.sh@365 -- # decimal 1 00:29:53.752 17:26:54 json_config -- scripts/common.sh@353 -- # local d=1 00:29:53.752 17:26:54 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:53.752 17:26:54 json_config -- scripts/common.sh@355 -- # echo 1 00:29:53.752 17:26:54 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:29:53.752 17:26:54 json_config -- scripts/common.sh@366 -- # decimal 2 00:29:53.752 17:26:54 json_config -- scripts/common.sh@353 -- # local d=2 00:29:53.752 17:26:54 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:53.752 17:26:54 json_config -- scripts/common.sh@355 -- # echo 2 00:29:53.752 17:26:54 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:29:53.752 17:26:54 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:53.752 17:26:54 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:53.752 17:26:54 json_config -- scripts/common.sh@368 -- # return 0 00:29:53.752 17:26:54 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:53.752 17:26:54 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:53.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.752 --rc genhtml_branch_coverage=1 00:29:53.752 --rc genhtml_function_coverage=1 00:29:53.752 --rc genhtml_legend=1 00:29:53.752 --rc geninfo_all_blocks=1 00:29:53.752 --rc geninfo_unexecuted_blocks=1 00:29:53.752 00:29:53.752 ' 00:29:53.752 17:26:54 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:53.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.752 --rc genhtml_branch_coverage=1 00:29:53.752 --rc genhtml_function_coverage=1 00:29:53.752 --rc genhtml_legend=1 00:29:53.752 --rc geninfo_all_blocks=1 00:29:53.752 --rc geninfo_unexecuted_blocks=1 00:29:53.752 00:29:53.752 ' 00:29:53.752 17:26:54 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:53.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.752 --rc genhtml_branch_coverage=1 00:29:53.752 --rc genhtml_function_coverage=1 00:29:53.752 --rc genhtml_legend=1 00:29:53.752 --rc geninfo_all_blocks=1 00:29:53.752 --rc geninfo_unexecuted_blocks=1 00:29:53.752 00:29:53.752 ' 00:29:53.752 17:26:54 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:53.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.752 --rc genhtml_branch_coverage=1 00:29:53.752 --rc genhtml_function_coverage=1 00:29:53.752 --rc genhtml_legend=1 00:29:53.752 --rc geninfo_all_blocks=1 00:29:53.752 --rc geninfo_unexecuted_blocks=1 00:29:53.752 00:29:53.752 ' 00:29:53.752 17:26:54 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:53.752 17:26:54 json_config -- nvmf/common.sh@7 -- # uname -s 00:29:53.752 17:26:54 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:53.752 17:26:54 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:53.752 17:26:54 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:53.752 17:26:54 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:53.752 17:26:54 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:53.752 17:26:54 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:53.752 17:26:54 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:53.752 17:26:54 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:53.752 17:26:54 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:53.752 17:26:54 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:53.752 17:26:54 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b224b750-caac-4cbd-bbde-095c4ddf7e9f 00:29:53.752 17:26:54 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=b224b750-caac-4cbd-bbde-095c4ddf7e9f 00:29:53.752 17:26:54 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:53.752 17:26:54 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:53.752 17:26:54 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:29:53.752 17:26:54 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:53.752 17:26:54 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:53.752 17:26:54 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:29:53.752 17:26:54 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:53.752 17:26:54 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:53.752 17:26:54 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:53.753 17:26:54 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.753 17:26:54 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.753 17:26:54 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.753 17:26:54 json_config -- paths/export.sh@5 -- # export PATH 00:29:53.753 17:26:54 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.753 17:26:54 json_config -- nvmf/common.sh@51 -- # : 0 00:29:53.753 17:26:54 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:53.753 17:26:54 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:53.753 17:26:54 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:53.753 17:26:54 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:53.753 17:26:54 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:53.753 17:26:54 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:53.753 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:53.753 17:26:54 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:53.753 17:26:54 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:53.753 17:26:54 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:53.753 17:26:54 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
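The xtrace above shows `scripts/common.sh` deciding whether the installed `lcov` is older than 2 by splitting both version strings on `.`, `-` and `:` and comparing field by field. The following is an illustrative re-creation of that pattern, not SPDK's actual `cmp_versions` code; it assumes purely numeric fields.

```shell
#!/usr/bin/env bash
# Hedged sketch of the version comparison traced in the log:
# split on '.', '-' and ':', then compare numerically field by field.
version_lt() {
  local IFS=.-:              # field separators, as in scripts/common.sh
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  local v a b
  local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < len; v++ )); do
    a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields compare as 0
    (( a > b )) && return 1           # first version is greater
    (( a < b )) && return 0           # first version is less
  done
  return 1                            # versions are equal -> not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```

This is why the trace prints `lt 1.15 2` followed by `return 0`: `lcov --version` reported 1.15, which compares below 2 at the first field.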
00:29:53.753 17:26:54 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:29:53.753 17:26:54 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:29:53.753 17:26:54 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:29:53.753 17:26:54 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:29:53.753 WARNING: No tests are enabled so not running JSON configuration tests 00:29:53.753 17:26:54 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:29:53.753 17:26:54 json_config -- json_config/json_config.sh@28 -- # exit 0 00:29:53.753 00:29:53.753 real 0m0.217s 00:29:53.753 user 0m0.143s 00:29:53.753 sys 0m0.084s 00:29:53.753 17:26:54 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:53.753 17:26:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:29:53.753 ************************************ 00:29:53.753 END TEST json_config 00:29:53.753 ************************************ 00:29:53.753 17:26:54 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:29:53.753 17:26:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:53.753 17:26:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:53.753 17:26:54 -- common/autotest_common.sh@10 -- # set +x 00:29:53.753 ************************************ 00:29:53.753 START TEST json_config_extra_key 00:29:53.753 ************************************ 00:29:53.753 17:26:54 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:29:54.012 17:26:54 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:54.012 17:26:54 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # lcov --version 00:29:54.012 17:26:54 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:54.012 17:26:54 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:54.012 17:26:54 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:54.012 17:26:54 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:54.012 17:26:54 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:54.012 17:26:54 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:29:54.012 17:26:54 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:29:54.012 17:26:54 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:29:54.012 17:26:54 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:29:54.012 17:26:54 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:29:54.012 17:26:54 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:29:54.012 17:26:54 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:29:54.012 17:26:54 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:54.012 17:26:54 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:29:54.012 17:26:54 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:29:54.012 17:26:54 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:54.012 17:26:54 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:54.012 17:26:54 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:29:54.012 17:26:54 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:29:54.012 17:26:54 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:54.012 17:26:54 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:29:54.012 17:26:54 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:29:54.012 17:26:54 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:29:54.012 17:26:54 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:29:54.012 17:26:54 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:54.012 17:26:54 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:29:54.012 17:26:54 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:29:54.012 17:26:54 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:54.012 17:26:54 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:54.012 17:26:54 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:29:54.012 17:26:54 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:54.012 17:26:54 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:54.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:54.012 --rc genhtml_branch_coverage=1 00:29:54.012 --rc genhtml_function_coverage=1 00:29:54.012 --rc genhtml_legend=1 00:29:54.012 --rc geninfo_all_blocks=1 00:29:54.012 --rc geninfo_unexecuted_blocks=1 00:29:54.012 00:29:54.012 ' 00:29:54.012 17:26:54 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:54.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:54.012 --rc genhtml_branch_coverage=1 00:29:54.012 --rc genhtml_function_coverage=1 00:29:54.012 --rc 
genhtml_legend=1 00:29:54.012 --rc geninfo_all_blocks=1 00:29:54.012 --rc geninfo_unexecuted_blocks=1 00:29:54.012 00:29:54.012 ' 00:29:54.012 17:26:54 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:54.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:54.012 --rc genhtml_branch_coverage=1 00:29:54.012 --rc genhtml_function_coverage=1 00:29:54.012 --rc genhtml_legend=1 00:29:54.012 --rc geninfo_all_blocks=1 00:29:54.012 --rc geninfo_unexecuted_blocks=1 00:29:54.012 00:29:54.012 ' 00:29:54.012 17:26:54 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:54.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:54.012 --rc genhtml_branch_coverage=1 00:29:54.012 --rc genhtml_function_coverage=1 00:29:54.012 --rc genhtml_legend=1 00:29:54.012 --rc geninfo_all_blocks=1 00:29:54.012 --rc geninfo_unexecuted_blocks=1 00:29:54.012 00:29:54.012 ' 00:29:54.012 17:26:54 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:54.012 17:26:54 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:29:54.012 17:26:54 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:54.012 17:26:54 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:54.012 17:26:54 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:54.012 17:26:54 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:54.012 17:26:54 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:54.012 17:26:54 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:54.013 17:26:54 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:54.013 17:26:54 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:54.013 17:26:54 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:54.013 17:26:54 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:54.013 17:26:54 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b224b750-caac-4cbd-bbde-095c4ddf7e9f 00:29:54.013 17:26:54 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=b224b750-caac-4cbd-bbde-095c4ddf7e9f 00:29:54.013 17:26:54 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:54.013 17:26:54 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:54.013 17:26:54 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:29:54.013 17:26:54 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:54.013 17:26:54 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:54.013 17:26:54 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:29:54.013 17:26:54 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:54.013 17:26:54 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:54.013 17:26:54 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:54.013 17:26:54 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.013 17:26:54 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.013 17:26:54 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.013 17:26:54 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:29:54.013 17:26:54 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.013 17:26:54 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:29:54.013 17:26:54 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:54.013 17:26:54 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:54.013 17:26:54 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:54.013 17:26:54 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:54.013 17:26:54 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:29:54.013 17:26:54 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:54.013 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:54.013 17:26:54 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:54.013 17:26:54 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:54.013 17:26:54 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:54.013 17:26:54 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:29:54.013 17:26:54 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:29:54.013 17:26:54 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:29:54.013 17:26:54 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:29:54.013 17:26:54 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:29:54.013 17:26:54 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:29:54.013 17:26:54 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:29:54.013 17:26:54 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:29:54.013 17:26:54 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:29:54.013 17:26:54 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:29:54.013 INFO: launching applications... 00:29:54.013 17:26:54 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:29:54.013 17:26:54 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:29:54.013 17:26:54 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:29:54.013 17:26:54 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:29:54.013 17:26:54 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:29:54.013 17:26:54 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:29:54.013 17:26:54 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:29:54.013 17:26:54 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:29:54.013 17:26:54 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:29:54.013 17:26:54 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57750 00:29:54.013 Waiting for target to run... 00:29:54.013 17:26:54 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:29:54.013 17:26:54 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57750 /var/tmp/spdk_tgt.sock 00:29:54.013 17:26:54 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57750 ']' 00:29:54.013 17:26:54 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:29:54.013 17:26:54 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:54.013 17:26:54 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:29:54.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
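The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock..." message above comes from a poll-until-ready helper. A minimal sketch of that idea is below; the real `waitforlisten` in SPDK's `autotest_common.sh` likely does more (such as probing the RPC endpoint), while this simplified version only waits for the socket file to appear.

```shell
#!/usr/bin/env bash
# Hypothetical sketch: poll until a UNIX-domain socket exists or a
# retry budget is exhausted. Names here are illustrative, not SPDK's.
waitforsocket() {
  local sock=$1 retries=${2:-100}
  while (( retries-- > 0 )); do
    [[ -S $sock ]] && return 0   # -S: path exists and is a socket
    sleep 0.1
  done
  return 1                       # timed out waiting for the listener
}
```

Usage mirrors the log: start the target in the background, then `waitforsocket /var/tmp/spdk_tgt.sock || kill "$pid"` before issuing any RPCs.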
00:29:54.013 17:26:54 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:29:54.013 17:26:54 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:54.013 17:26:54 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:29:54.013 [2024-11-26 17:26:54.696103] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:29:54.013 [2024-11-26 17:26:54.696236] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57750 ] 00:29:54.580 [2024-11-26 17:26:55.080078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:54.580 [2024-11-26 17:26:55.220749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:55.512 17:26:56 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:55.512 17:26:56 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:29:55.512 00:29:55.512 17:26:56 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:29:55.512 INFO: shutting down applications... 00:29:55.512 17:26:56 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:29:55.512 17:26:56 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:29:55.512 17:26:56 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:29:55.512 17:26:56 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:29:55.513 17:26:56 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57750 ]] 00:29:55.513 17:26:56 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57750 00:29:55.513 17:26:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:29:55.513 17:26:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:29:55.513 17:26:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57750 00:29:55.513 17:26:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:29:56.079 17:26:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:29:56.079 17:26:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:29:56.079 17:26:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57750 00:29:56.079 17:26:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:29:56.647 17:26:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:29:56.647 17:26:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:29:56.647 17:26:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57750 00:29:56.647 17:26:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:29:57.215 17:26:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:29:57.215 17:26:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:29:57.215 17:26:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57750 00:29:57.215 17:26:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:29:57.472 17:26:58 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:29:57.472 17:26:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:29:57.472 17:26:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57750 00:29:57.472 17:26:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:29:58.037 17:26:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:29:58.037 17:26:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:29:58.037 17:26:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57750 00:29:58.037 17:26:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:29:58.604 17:26:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:29:58.604 17:26:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:29:58.604 17:26:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57750 00:29:58.604 17:26:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:29:59.171 17:26:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:29:59.171 17:26:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:29:59.171 17:26:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57750 00:29:59.171 17:26:59 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:29:59.171 17:26:59 json_config_extra_key -- json_config/common.sh@43 -- # break 00:29:59.171 17:26:59 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:29:59.171 SPDK target shutdown done 00:29:59.171 17:26:59 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:29:59.171 Success 00:29:59.171 17:26:59 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:29:59.171 00:29:59.171 real 0m5.240s 00:29:59.171 user 0m4.753s 00:29:59.171 sys 0m0.613s 00:29:59.171 17:26:59 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 
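The shutdown sequence traced above (`kill -SIGINT`, then repeated `kill -0` checks with `sleep 0.5`, up to 30 tries, ending in `break` and "SPDK target shutdown done") is a standard graceful-shutdown loop. A hedged re-creation of the pattern, with illustrative names rather than SPDK's actual helpers, and the signal made a parameter:

```shell
#!/usr/bin/env bash
# Sketch of the shutdown loop from json_config/common.sh as seen in the
# trace: signal the process, then poll until it exits or ~15s elapse.
shutdown_app() {
  local pid=$1 sig=${2:-SIGINT}
  kill -"$sig" "$pid" 2>/dev/null || return 0   # already gone
  local i
  for (( i = 0; i < 30; i++ )); do
    # kill -0 delivers no signal; it only tests whether the PID is alive
    kill -0 "$pid" 2>/dev/null || return 0
    sleep 0.5
  done
  return 1   # still running after the retry budget -> caller escalates
}
```

Note that in a non-interactive shell without job control, background children are started with SIGINT ignored, so a real harness either relies on the target installing its own SIGINT handler (as spdk_tgt does) or falls back to SIGTERM.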
00:29:59.171 17:26:59 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:29:59.171 ************************************ 00:29:59.171 END TEST json_config_extra_key 00:29:59.171 ************************************ 00:29:59.171 17:26:59 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:29:59.171 17:26:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:59.171 17:26:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:59.171 17:26:59 -- common/autotest_common.sh@10 -- # set +x 00:29:59.171 ************************************ 00:29:59.171 START TEST alias_rpc 00:29:59.171 ************************************ 00:29:59.171 17:26:59 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:29:59.171 * Looking for test storage... 00:29:59.171 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:29:59.171 17:26:59 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:59.171 17:26:59 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:29:59.171 17:26:59 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:59.432 17:26:59 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:59.432 17:26:59 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:59.432 17:26:59 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:59.432 17:26:59 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:59.432 17:26:59 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:29:59.432 17:26:59 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:29:59.432 17:26:59 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:29:59.432 17:26:59 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:29:59.432 17:26:59 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:29:59.432 17:26:59 alias_rpc -- 
scripts/common.sh@340 -- # ver1_l=2 00:29:59.432 17:26:59 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:29:59.432 17:26:59 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:59.432 17:26:59 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:29:59.432 17:26:59 alias_rpc -- scripts/common.sh@345 -- # : 1 00:29:59.432 17:26:59 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:59.432 17:26:59 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:59.432 17:26:59 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:29:59.432 17:26:59 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:29:59.432 17:26:59 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:59.432 17:26:59 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:29:59.432 17:26:59 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:29:59.432 17:26:59 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:29:59.432 17:26:59 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:29:59.432 17:26:59 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:59.432 17:26:59 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:29:59.432 17:26:59 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:29:59.432 17:26:59 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:59.432 17:26:59 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:59.432 17:26:59 alias_rpc -- scripts/common.sh@368 -- # return 0 00:29:59.432 17:26:59 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:59.432 17:26:59 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:59.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.432 --rc genhtml_branch_coverage=1 00:29:59.432 --rc genhtml_function_coverage=1 00:29:59.432 --rc genhtml_legend=1 00:29:59.432 --rc geninfo_all_blocks=1 00:29:59.432 --rc 
geninfo_unexecuted_blocks=1 00:29:59.432 00:29:59.432 ' 00:29:59.432 17:26:59 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:59.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.432 --rc genhtml_branch_coverage=1 00:29:59.432 --rc genhtml_function_coverage=1 00:29:59.432 --rc genhtml_legend=1 00:29:59.432 --rc geninfo_all_blocks=1 00:29:59.432 --rc geninfo_unexecuted_blocks=1 00:29:59.432 00:29:59.432 ' 00:29:59.432 17:26:59 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:59.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.432 --rc genhtml_branch_coverage=1 00:29:59.432 --rc genhtml_function_coverage=1 00:29:59.432 --rc genhtml_legend=1 00:29:59.432 --rc geninfo_all_blocks=1 00:29:59.432 --rc geninfo_unexecuted_blocks=1 00:29:59.432 00:29:59.432 ' 00:29:59.432 17:26:59 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:59.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.432 --rc genhtml_branch_coverage=1 00:29:59.432 --rc genhtml_function_coverage=1 00:29:59.432 --rc genhtml_legend=1 00:29:59.432 --rc geninfo_all_blocks=1 00:29:59.432 --rc geninfo_unexecuted_blocks=1 00:29:59.432 00:29:59.432 ' 00:29:59.432 17:26:59 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:29:59.432 17:26:59 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57874 00:29:59.432 17:26:59 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:59.432 17:26:59 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57874 00:29:59.432 17:26:59 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57874 ']' 00:29:59.432 17:26:59 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:59.432 17:26:59 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:59.432 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:29:59.432 17:26:59 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:59.432 17:26:59 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:59.432 17:26:59 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:59.432 [2024-11-26 17:26:59.991636] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:29:59.432 [2024-11-26 17:26:59.991769] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57874 ] 00:29:59.692 [2024-11-26 17:27:00.160259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:59.692 [2024-11-26 17:27:00.287134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:00.639 17:27:01 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:00.639 17:27:01 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:30:00.639 17:27:01 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:30:00.898 17:27:01 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57874 00:30:00.898 17:27:01 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57874 ']' 00:30:00.898 17:27:01 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57874 00:30:00.898 17:27:01 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:30:00.898 17:27:01 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:00.898 17:27:01 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57874 00:30:00.898 17:27:01 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:00.898 17:27:01 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:00.898 
killing process with pid 57874 00:30:00.898 17:27:01 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57874' 00:30:00.898 17:27:01 alias_rpc -- common/autotest_common.sh@973 -- # kill 57874 00:30:00.898 17:27:01 alias_rpc -- common/autotest_common.sh@978 -- # wait 57874 00:30:04.188 00:30:04.188 real 0m4.753s 00:30:04.188 user 0m4.868s 00:30:04.188 sys 0m0.589s 00:30:04.188 17:27:04 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:04.188 17:27:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:04.188 ************************************ 00:30:04.188 END TEST alias_rpc 00:30:04.188 ************************************ 00:30:04.189 17:27:04 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:30:04.189 17:27:04 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:30:04.189 17:27:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:04.189 17:27:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:04.189 17:27:04 -- common/autotest_common.sh@10 -- # set +x 00:30:04.189 ************************************ 00:30:04.189 START TEST spdkcli_tcp 00:30:04.189 ************************************ 00:30:04.189 17:27:04 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:30:04.189 * Looking for test storage... 
00:30:04.189 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:30:04.189 17:27:04 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:04.189 17:27:04 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:04.189 17:27:04 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:30:04.189 17:27:04 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:04.189 17:27:04 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:04.189 17:27:04 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:04.189 17:27:04 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:04.189 17:27:04 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:30:04.189 17:27:04 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:30:04.189 17:27:04 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:30:04.189 17:27:04 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:30:04.189 17:27:04 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:30:04.189 17:27:04 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:30:04.189 17:27:04 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:30:04.189 17:27:04 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:04.189 17:27:04 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:30:04.189 17:27:04 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:30:04.189 17:27:04 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:04.189 17:27:04 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:04.189 17:27:04 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:30:04.189 17:27:04 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:30:04.189 17:27:04 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:04.189 17:27:04 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:30:04.189 17:27:04 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:30:04.189 17:27:04 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:30:04.189 17:27:04 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:30:04.189 17:27:04 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:04.189 17:27:04 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:30:04.189 17:27:04 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:30:04.189 17:27:04 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:04.189 17:27:04 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:04.189 17:27:04 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:30:04.189 17:27:04 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:04.189 17:27:04 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:04.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:04.189 --rc genhtml_branch_coverage=1 00:30:04.189 --rc genhtml_function_coverage=1 00:30:04.189 --rc genhtml_legend=1 00:30:04.189 --rc geninfo_all_blocks=1 00:30:04.189 --rc geninfo_unexecuted_blocks=1 00:30:04.189 00:30:04.189 ' 00:30:04.189 17:27:04 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:04.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:04.189 --rc genhtml_branch_coverage=1 00:30:04.189 --rc genhtml_function_coverage=1 00:30:04.189 --rc genhtml_legend=1 00:30:04.189 --rc geninfo_all_blocks=1 00:30:04.189 --rc geninfo_unexecuted_blocks=1 00:30:04.189 00:30:04.189 ' 00:30:04.189 17:27:04 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:04.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:04.189 --rc genhtml_branch_coverage=1 00:30:04.189 --rc genhtml_function_coverage=1 00:30:04.189 --rc genhtml_legend=1 00:30:04.189 --rc geninfo_all_blocks=1 00:30:04.189 --rc geninfo_unexecuted_blocks=1 00:30:04.189 00:30:04.189 ' 00:30:04.189 17:27:04 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:04.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:04.189 --rc genhtml_branch_coverage=1 00:30:04.189 --rc genhtml_function_coverage=1 00:30:04.189 --rc genhtml_legend=1 00:30:04.189 --rc geninfo_all_blocks=1 00:30:04.189 --rc geninfo_unexecuted_blocks=1 00:30:04.189 00:30:04.189 ' 00:30:04.189 17:27:04 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:30:04.189 17:27:04 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:30:04.189 17:27:04 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:30:04.189 17:27:04 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:30:04.189 17:27:04 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:30:04.189 17:27:04 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:04.189 17:27:04 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:30:04.189 17:27:04 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:04.189 17:27:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:04.189 17:27:04 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57986 00:30:04.189 17:27:04 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57986 00:30:04.189 17:27:04 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57986 ']' 00:30:04.189 17:27:04 spdkcli_tcp -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:30:04.189 17:27:04 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:30:04.189 17:27:04 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:04.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:04.189 17:27:04 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:04.189 17:27:04 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:04.189 17:27:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:04.189 [2024-11-26 17:27:04.868756] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:30:04.189 [2024-11-26 17:27:04.868932] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57986 ] 00:30:04.448 [2024-11-26 17:27:05.051801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:04.707 [2024-11-26 17:27:05.195542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:04.707 [2024-11-26 17:27:05.195615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:05.646 17:27:06 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:05.646 17:27:06 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:30:05.646 17:27:06 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58009 00:30:05.646 17:27:06 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:30:05.646 17:27:06 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:30:06.029 [ 00:30:06.029 "bdev_malloc_delete", 00:30:06.029 
"bdev_malloc_create", 00:30:06.029 "bdev_null_resize", 00:30:06.029 "bdev_null_delete", 00:30:06.029 "bdev_null_create", 00:30:06.029 "bdev_nvme_cuse_unregister", 00:30:06.029 "bdev_nvme_cuse_register", 00:30:06.029 "bdev_opal_new_user", 00:30:06.029 "bdev_opal_set_lock_state", 00:30:06.029 "bdev_opal_delete", 00:30:06.029 "bdev_opal_get_info", 00:30:06.029 "bdev_opal_create", 00:30:06.029 "bdev_nvme_opal_revert", 00:30:06.029 "bdev_nvme_opal_init", 00:30:06.029 "bdev_nvme_send_cmd", 00:30:06.029 "bdev_nvme_set_keys", 00:30:06.029 "bdev_nvme_get_path_iostat", 00:30:06.029 "bdev_nvme_get_mdns_discovery_info", 00:30:06.029 "bdev_nvme_stop_mdns_discovery", 00:30:06.029 "bdev_nvme_start_mdns_discovery", 00:30:06.029 "bdev_nvme_set_multipath_policy", 00:30:06.029 "bdev_nvme_set_preferred_path", 00:30:06.029 "bdev_nvme_get_io_paths", 00:30:06.029 "bdev_nvme_remove_error_injection", 00:30:06.029 "bdev_nvme_add_error_injection", 00:30:06.029 "bdev_nvme_get_discovery_info", 00:30:06.029 "bdev_nvme_stop_discovery", 00:30:06.029 "bdev_nvme_start_discovery", 00:30:06.029 "bdev_nvme_get_controller_health_info", 00:30:06.029 "bdev_nvme_disable_controller", 00:30:06.029 "bdev_nvme_enable_controller", 00:30:06.029 "bdev_nvme_reset_controller", 00:30:06.029 "bdev_nvme_get_transport_statistics", 00:30:06.029 "bdev_nvme_apply_firmware", 00:30:06.029 "bdev_nvme_detach_controller", 00:30:06.029 "bdev_nvme_get_controllers", 00:30:06.029 "bdev_nvme_attach_controller", 00:30:06.029 "bdev_nvme_set_hotplug", 00:30:06.029 "bdev_nvme_set_options", 00:30:06.029 "bdev_passthru_delete", 00:30:06.029 "bdev_passthru_create", 00:30:06.029 "bdev_lvol_set_parent_bdev", 00:30:06.029 "bdev_lvol_set_parent", 00:30:06.029 "bdev_lvol_check_shallow_copy", 00:30:06.029 "bdev_lvol_start_shallow_copy", 00:30:06.029 "bdev_lvol_grow_lvstore", 00:30:06.029 "bdev_lvol_get_lvols", 00:30:06.029 "bdev_lvol_get_lvstores", 00:30:06.029 "bdev_lvol_delete", 00:30:06.029 "bdev_lvol_set_read_only", 00:30:06.029 
"bdev_lvol_resize", 00:30:06.029 "bdev_lvol_decouple_parent", 00:30:06.029 "bdev_lvol_inflate", 00:30:06.029 "bdev_lvol_rename", 00:30:06.029 "bdev_lvol_clone_bdev", 00:30:06.029 "bdev_lvol_clone", 00:30:06.029 "bdev_lvol_snapshot", 00:30:06.029 "bdev_lvol_create", 00:30:06.029 "bdev_lvol_delete_lvstore", 00:30:06.029 "bdev_lvol_rename_lvstore", 00:30:06.029 "bdev_lvol_create_lvstore", 00:30:06.029 "bdev_raid_set_options", 00:30:06.029 "bdev_raid_remove_base_bdev", 00:30:06.029 "bdev_raid_add_base_bdev", 00:30:06.029 "bdev_raid_delete", 00:30:06.029 "bdev_raid_create", 00:30:06.029 "bdev_raid_get_bdevs", 00:30:06.029 "bdev_error_inject_error", 00:30:06.029 "bdev_error_delete", 00:30:06.029 "bdev_error_create", 00:30:06.029 "bdev_split_delete", 00:30:06.029 "bdev_split_create", 00:30:06.029 "bdev_delay_delete", 00:30:06.029 "bdev_delay_create", 00:30:06.029 "bdev_delay_update_latency", 00:30:06.029 "bdev_zone_block_delete", 00:30:06.029 "bdev_zone_block_create", 00:30:06.029 "blobfs_create", 00:30:06.029 "blobfs_detect", 00:30:06.029 "blobfs_set_cache_size", 00:30:06.029 "bdev_aio_delete", 00:30:06.029 "bdev_aio_rescan", 00:30:06.029 "bdev_aio_create", 00:30:06.029 "bdev_ftl_set_property", 00:30:06.029 "bdev_ftl_get_properties", 00:30:06.029 "bdev_ftl_get_stats", 00:30:06.029 "bdev_ftl_unmap", 00:30:06.029 "bdev_ftl_unload", 00:30:06.029 "bdev_ftl_delete", 00:30:06.029 "bdev_ftl_load", 00:30:06.029 "bdev_ftl_create", 00:30:06.029 "bdev_virtio_attach_controller", 00:30:06.029 "bdev_virtio_scsi_get_devices", 00:30:06.029 "bdev_virtio_detach_controller", 00:30:06.029 "bdev_virtio_blk_set_hotplug", 00:30:06.029 "bdev_iscsi_delete", 00:30:06.029 "bdev_iscsi_create", 00:30:06.029 "bdev_iscsi_set_options", 00:30:06.029 "accel_error_inject_error", 00:30:06.029 "ioat_scan_accel_module", 00:30:06.029 "dsa_scan_accel_module", 00:30:06.029 "iaa_scan_accel_module", 00:30:06.029 "keyring_file_remove_key", 00:30:06.029 "keyring_file_add_key", 00:30:06.029 
"keyring_linux_set_options", 00:30:06.029 "fsdev_aio_delete", 00:30:06.029 "fsdev_aio_create", 00:30:06.029 "iscsi_get_histogram", 00:30:06.029 "iscsi_enable_histogram", 00:30:06.029 "iscsi_set_options", 00:30:06.029 "iscsi_get_auth_groups", 00:30:06.029 "iscsi_auth_group_remove_secret", 00:30:06.029 "iscsi_auth_group_add_secret", 00:30:06.029 "iscsi_delete_auth_group", 00:30:06.029 "iscsi_create_auth_group", 00:30:06.029 "iscsi_set_discovery_auth", 00:30:06.029 "iscsi_get_options", 00:30:06.029 "iscsi_target_node_request_logout", 00:30:06.029 "iscsi_target_node_set_redirect", 00:30:06.029 "iscsi_target_node_set_auth", 00:30:06.029 "iscsi_target_node_add_lun", 00:30:06.029 "iscsi_get_stats", 00:30:06.029 "iscsi_get_connections", 00:30:06.029 "iscsi_portal_group_set_auth", 00:30:06.029 "iscsi_start_portal_group", 00:30:06.029 "iscsi_delete_portal_group", 00:30:06.029 "iscsi_create_portal_group", 00:30:06.029 "iscsi_get_portal_groups", 00:30:06.029 "iscsi_delete_target_node", 00:30:06.029 "iscsi_target_node_remove_pg_ig_maps", 00:30:06.029 "iscsi_target_node_add_pg_ig_maps", 00:30:06.029 "iscsi_create_target_node", 00:30:06.029 "iscsi_get_target_nodes", 00:30:06.029 "iscsi_delete_initiator_group", 00:30:06.029 "iscsi_initiator_group_remove_initiators", 00:30:06.029 "iscsi_initiator_group_add_initiators", 00:30:06.029 "iscsi_create_initiator_group", 00:30:06.029 "iscsi_get_initiator_groups", 00:30:06.029 "nvmf_set_crdt", 00:30:06.029 "nvmf_set_config", 00:30:06.029 "nvmf_set_max_subsystems", 00:30:06.029 "nvmf_stop_mdns_prr", 00:30:06.029 "nvmf_publish_mdns_prr", 00:30:06.029 "nvmf_subsystem_get_listeners", 00:30:06.029 "nvmf_subsystem_get_qpairs", 00:30:06.029 "nvmf_subsystem_get_controllers", 00:30:06.029 "nvmf_get_stats", 00:30:06.029 "nvmf_get_transports", 00:30:06.029 "nvmf_create_transport", 00:30:06.029 "nvmf_get_targets", 00:30:06.029 "nvmf_delete_target", 00:30:06.029 "nvmf_create_target", 00:30:06.029 "nvmf_subsystem_allow_any_host", 00:30:06.029 
"nvmf_subsystem_set_keys", 00:30:06.029 "nvmf_subsystem_remove_host", 00:30:06.029 "nvmf_subsystem_add_host", 00:30:06.029 "nvmf_ns_remove_host", 00:30:06.029 "nvmf_ns_add_host", 00:30:06.029 "nvmf_subsystem_remove_ns", 00:30:06.029 "nvmf_subsystem_set_ns_ana_group", 00:30:06.029 "nvmf_subsystem_add_ns", 00:30:06.029 "nvmf_subsystem_listener_set_ana_state", 00:30:06.029 "nvmf_discovery_get_referrals", 00:30:06.029 "nvmf_discovery_remove_referral", 00:30:06.029 "nvmf_discovery_add_referral", 00:30:06.029 "nvmf_subsystem_remove_listener", 00:30:06.029 "nvmf_subsystem_add_listener", 00:30:06.029 "nvmf_delete_subsystem", 00:30:06.029 "nvmf_create_subsystem", 00:30:06.029 "nvmf_get_subsystems", 00:30:06.029 "env_dpdk_get_mem_stats", 00:30:06.029 "nbd_get_disks", 00:30:06.029 "nbd_stop_disk", 00:30:06.029 "nbd_start_disk", 00:30:06.029 "ublk_recover_disk", 00:30:06.029 "ublk_get_disks", 00:30:06.029 "ublk_stop_disk", 00:30:06.029 "ublk_start_disk", 00:30:06.029 "ublk_destroy_target", 00:30:06.029 "ublk_create_target", 00:30:06.029 "virtio_blk_create_transport", 00:30:06.029 "virtio_blk_get_transports", 00:30:06.029 "vhost_controller_set_coalescing", 00:30:06.030 "vhost_get_controllers", 00:30:06.030 "vhost_delete_controller", 00:30:06.030 "vhost_create_blk_controller", 00:30:06.030 "vhost_scsi_controller_remove_target", 00:30:06.030 "vhost_scsi_controller_add_target", 00:30:06.030 "vhost_start_scsi_controller", 00:30:06.030 "vhost_create_scsi_controller", 00:30:06.030 "thread_set_cpumask", 00:30:06.030 "scheduler_set_options", 00:30:06.030 "framework_get_governor", 00:30:06.030 "framework_get_scheduler", 00:30:06.030 "framework_set_scheduler", 00:30:06.030 "framework_get_reactors", 00:30:06.030 "thread_get_io_channels", 00:30:06.030 "thread_get_pollers", 00:30:06.030 "thread_get_stats", 00:30:06.030 "framework_monitor_context_switch", 00:30:06.030 "spdk_kill_instance", 00:30:06.030 "log_enable_timestamps", 00:30:06.030 "log_get_flags", 00:30:06.030 "log_clear_flag", 
00:30:06.030 "log_set_flag", 00:30:06.030 "log_get_level", 00:30:06.030 "log_set_level", 00:30:06.030 "log_get_print_level", 00:30:06.030 "log_set_print_level", 00:30:06.030 "framework_enable_cpumask_locks", 00:30:06.030 "framework_disable_cpumask_locks", 00:30:06.030 "framework_wait_init", 00:30:06.030 "framework_start_init", 00:30:06.030 "scsi_get_devices", 00:30:06.030 "bdev_get_histogram", 00:30:06.030 "bdev_enable_histogram", 00:30:06.030 "bdev_set_qos_limit", 00:30:06.030 "bdev_set_qd_sampling_period", 00:30:06.030 "bdev_get_bdevs", 00:30:06.030 "bdev_reset_iostat", 00:30:06.030 "bdev_get_iostat", 00:30:06.030 "bdev_examine", 00:30:06.030 "bdev_wait_for_examine", 00:30:06.030 "bdev_set_options", 00:30:06.030 "accel_get_stats", 00:30:06.030 "accel_set_options", 00:30:06.030 "accel_set_driver", 00:30:06.030 "accel_crypto_key_destroy", 00:30:06.030 "accel_crypto_keys_get", 00:30:06.030 "accel_crypto_key_create", 00:30:06.030 "accel_assign_opc", 00:30:06.030 "accel_get_module_info", 00:30:06.030 "accel_get_opc_assignments", 00:30:06.030 "vmd_rescan", 00:30:06.030 "vmd_remove_device", 00:30:06.030 "vmd_enable", 00:30:06.030 "sock_get_default_impl", 00:30:06.030 "sock_set_default_impl", 00:30:06.030 "sock_impl_set_options", 00:30:06.030 "sock_impl_get_options", 00:30:06.030 "iobuf_get_stats", 00:30:06.030 "iobuf_set_options", 00:30:06.030 "keyring_get_keys", 00:30:06.030 "framework_get_pci_devices", 00:30:06.030 "framework_get_config", 00:30:06.030 "framework_get_subsystems", 00:30:06.030 "fsdev_set_opts", 00:30:06.030 "fsdev_get_opts", 00:30:06.030 "trace_get_info", 00:30:06.030 "trace_get_tpoint_group_mask", 00:30:06.030 "trace_disable_tpoint_group", 00:30:06.030 "trace_enable_tpoint_group", 00:30:06.030 "trace_clear_tpoint_mask", 00:30:06.030 "trace_set_tpoint_mask", 00:30:06.030 "notify_get_notifications", 00:30:06.030 "notify_get_types", 00:30:06.030 "spdk_get_version", 00:30:06.030 "rpc_get_methods" 00:30:06.030 ] 00:30:06.030 17:27:06 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:30:06.030 17:27:06 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:06.030 17:27:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:06.030 17:27:06 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:30:06.030 17:27:06 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57986 00:30:06.030 17:27:06 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57986 ']' 00:30:06.030 17:27:06 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57986 00:30:06.030 17:27:06 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:30:06.030 17:27:06 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:06.030 17:27:06 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57986 00:30:06.030 17:27:06 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:06.030 17:27:06 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:06.030 killing process with pid 57986 00:30:06.030 17:27:06 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57986' 00:30:06.030 17:27:06 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57986 00:30:06.030 17:27:06 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57986 00:30:09.337 00:30:09.337 real 0m5.002s 00:30:09.337 user 0m9.004s 00:30:09.337 sys 0m0.676s 00:30:09.337 17:27:09 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:09.337 17:27:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:09.337 ************************************ 00:30:09.337 END TEST spdkcli_tcp 00:30:09.337 ************************************ 00:30:09.337 17:27:09 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:30:09.337 17:27:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:09.337 17:27:09 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:30:09.337 17:27:09 -- common/autotest_common.sh@10 -- # set +x 00:30:09.337 ************************************ 00:30:09.337 START TEST dpdk_mem_utility 00:30:09.337 ************************************ 00:30:09.337 17:27:09 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:30:09.337 * Looking for test storage... 00:30:09.337 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:30:09.337 17:27:09 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:09.337 17:27:09 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:30:09.337 17:27:09 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:09.337 17:27:09 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:09.337 17:27:09 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:09.337 17:27:09 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:09.337 17:27:09 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:09.337 17:27:09 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:30:09.337 17:27:09 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:30:09.337 17:27:09 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:30:09.337 17:27:09 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:30:09.337 17:27:09 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:30:09.337 17:27:09 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:30:09.337 17:27:09 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:30:09.337 17:27:09 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:09.337 17:27:09 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:30:09.337 17:27:09 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:30:09.337 
17:27:09 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:09.337 17:27:09 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:09.337 17:27:09 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:30:09.337 17:27:09 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:30:09.337 17:27:09 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:09.337 17:27:09 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:30:09.337 17:27:09 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:30:09.337 17:27:09 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:30:09.337 17:27:09 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:30:09.337 17:27:09 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:09.337 17:27:09 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:30:09.337 17:27:09 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:30:09.337 17:27:09 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:09.337 17:27:09 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:09.337 17:27:09 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:30:09.337 17:27:09 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:09.337 17:27:09 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:09.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.337 --rc genhtml_branch_coverage=1 00:30:09.337 --rc genhtml_function_coverage=1 00:30:09.337 --rc genhtml_legend=1 00:30:09.337 --rc geninfo_all_blocks=1 00:30:09.337 --rc geninfo_unexecuted_blocks=1 00:30:09.337 00:30:09.337 ' 00:30:09.337 17:27:09 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:09.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.337 --rc 
genhtml_branch_coverage=1 00:30:09.337 --rc genhtml_function_coverage=1 00:30:09.337 --rc genhtml_legend=1 00:30:09.337 --rc geninfo_all_blocks=1 00:30:09.337 --rc geninfo_unexecuted_blocks=1 00:30:09.337 00:30:09.337 ' 00:30:09.337 17:27:09 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:09.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.337 --rc genhtml_branch_coverage=1 00:30:09.337 --rc genhtml_function_coverage=1 00:30:09.337 --rc genhtml_legend=1 00:30:09.337 --rc geninfo_all_blocks=1 00:30:09.337 --rc geninfo_unexecuted_blocks=1 00:30:09.337 00:30:09.337 ' 00:30:09.337 17:27:09 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:09.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.337 --rc genhtml_branch_coverage=1 00:30:09.337 --rc genhtml_function_coverage=1 00:30:09.337 --rc genhtml_legend=1 00:30:09.337 --rc geninfo_all_blocks=1 00:30:09.337 --rc geninfo_unexecuted_blocks=1 00:30:09.337 00:30:09.337 ' 00:30:09.337 17:27:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:30:09.337 17:27:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58119 00:30:09.337 17:27:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:09.337 17:27:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58119 00:30:09.337 17:27:09 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58119 ']' 00:30:09.337 17:27:09 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:09.337 17:27:09 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:09.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:09.337 17:27:09 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:09.337 17:27:09 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:09.337 17:27:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:30:09.337 [2024-11-26 17:27:09.966238] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:30:09.338 [2024-11-26 17:27:09.966412] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58119 ] 00:30:09.596 [2024-11-26 17:27:10.151255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:09.859 [2024-11-26 17:27:10.291685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:10.798 17:27:11 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:10.798 17:27:11 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:30:10.798 17:27:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:30:10.798 17:27:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:30:10.798 17:27:11 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.798 17:27:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:30:10.798 { 00:30:10.798 "filename": "/tmp/spdk_mem_dump.txt" 00:30:10.798 } 00:30:10.798 17:27:11 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.798 17:27:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:30:10.798 DPDK memory size 824.000000 MiB in 1 heap(s) 00:30:10.798 1 heaps 
totaling size 824.000000 MiB 00:30:10.798 size: 824.000000 MiB heap id: 0 00:30:10.798 end heaps---------- 00:30:10.798 9 mempools totaling size 603.782043 MiB 00:30:10.798 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:30:10.798 size: 158.602051 MiB name: PDU_data_out_Pool 00:30:10.798 size: 100.555481 MiB name: bdev_io_58119 00:30:10.798 size: 50.003479 MiB name: msgpool_58119 00:30:10.798 size: 36.509338 MiB name: fsdev_io_58119 00:30:10.798 size: 21.763794 MiB name: PDU_Pool 00:30:10.798 size: 19.513306 MiB name: SCSI_TASK_Pool 00:30:10.798 size: 4.133484 MiB name: evtpool_58119 00:30:10.798 size: 0.026123 MiB name: Session_Pool 00:30:10.798 end mempools------- 00:30:10.798 6 memzones totaling size 4.142822 MiB 00:30:10.798 size: 1.000366 MiB name: RG_ring_0_58119 00:30:10.798 size: 1.000366 MiB name: RG_ring_1_58119 00:30:10.798 size: 1.000366 MiB name: RG_ring_4_58119 00:30:10.798 size: 1.000366 MiB name: RG_ring_5_58119 00:30:10.798 size: 0.125366 MiB name: RG_ring_2_58119 00:30:10.798 size: 0.015991 MiB name: RG_ring_3_58119 00:30:10.798 end memzones------- 00:30:10.798 17:27:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:30:10.798 heap id: 0 total size: 824.000000 MiB number of busy elements: 326 number of free elements: 18 00:30:10.798 list of free elements. 
size: 16.778687 MiB
00:30:10.798 element at address: 0x200006400000 with size: 1.995972 MiB
00:30:10.798 element at address: 0x20000a600000 with size: 1.995972 MiB
00:30:10.798 element at address: 0x200003e00000 with size: 1.991028 MiB
00:30:10.798 element at address: 0x200019500040 with size: 0.999939 MiB
00:30:10.798 element at address: 0x200019900040 with size: 0.999939 MiB
00:30:10.798 element at address: 0x200019a00000 with size: 0.999084 MiB
00:30:10.798 element at address: 0x200032600000 with size: 0.994324 MiB
00:30:10.798 element at address: 0x200000400000 with size: 0.992004 MiB
00:30:10.798 element at address: 0x200019200000 with size: 0.959656 MiB
00:30:10.798 element at address: 0x200019d00040 with size: 0.936401 MiB
00:30:10.798 element at address: 0x200000200000 with size: 0.716980 MiB
00:30:10.798 element at address: 0x20001b400000 with size: 0.559998 MiB
00:30:10.798 element at address: 0x200000c00000 with size: 0.489197 MiB
00:30:10.798 element at address: 0x200019600000 with size: 0.487976 MiB
00:30:10.798 element at address: 0x200019e00000 with size: 0.485413 MiB
00:30:10.798 element at address: 0x200012c00000 with size: 0.433472 MiB
00:30:10.798 element at address: 0x200028800000 with size: 0.390442 MiB
00:30:10.798 element at address: 0x200000800000 with size: 0.350891 MiB
00:30:10.798 list of standard malloc elements. size: 199.290405 MiB
00:30:10.798 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:30:10.798 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:30:10.798 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:30:10.798 element at address: 0x2000197fff80 with size: 1.000183 MiB
00:30:10.798 element at address: 0x200019bfff80 with size: 1.000183 MiB
00:30:10.798 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:30:10.798 element at address: 0x200019deff40 with size: 0.062683 MiB
00:30:10.798 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:30:10.798 element at address: 0x20000a5ff040 with size: 0.000427 MiB
00:30:10.798 element at address: 0x200019defdc0 with size: 0.000366 MiB
00:30:10.798 element at address: 0x200012bff040 with size: 0.000305 MiB
00:30:10.798 element at address: 0x2000002d7b00 with size: 0.000244 MiB
00:30:10.798 element at address: 0x2000003d9d80 with size: 0.000244 MiB
00:30:10.798 element at address: 0x2000004fdf40 with size: 0.000244 MiB
00:30:10.798 element at address: 0x2000004fe040 with size: 0.000244 MiB
00:30:10.798 element at address: 0x2000004fe140 with size: 0.000244 MiB
00:30:10.798 element at address: 0x2000004fe240 with size: 0.000244 MiB
00:30:10.798 element at address: 0x2000004fe340 with size: 0.000244 MiB
00:30:10.798 element at address: 0x2000004fe440 with size: 0.000244 MiB
00:30:10.798 element at address: 0x2000004fe540 with size: 0.000244 MiB
00:30:10.798 element at address: 0x2000004fe640 with size: 0.000244 MiB
00:30:10.798 element at address: 0x2000004fe740 with size: 0.000244 MiB
00:30:10.798 element at address: 0x2000004fe840 with size: 0.000244 MiB
00:30:10.798 element at address: 0x2000004fe940 with size: 0.000244 MiB
00:30:10.798 element at address: 0x2000004fea40 with size: 0.000244 MiB
00:30:10.798 element at address: 0x2000004feb40 with size: 0.000244 MiB
00:30:10.798 element at address: 0x2000004fec40 with size: 0.000244 MiB
00:30:10.798 element at address: 0x2000004fed40 with size: 0.000244 MiB
00:30:10.798 element at address: 0x2000004fee40 with size: 0.000244 MiB
00:30:10.798 element at address: 0x2000004fef40 with size: 0.000244 MiB
00:30:10.798 element at address: 0x2000004ff040 with size: 0.000244 MiB
00:30:10.798 element at address: 0x2000004ff140 with size: 0.000244 MiB
00:30:10.798 element at address: 0x2000004ff240 with size: 0.000244 MiB
00:30:10.798 element at address: 0x2000004ff340 with size: 0.000244 MiB
00:30:10.798 element at address: 0x2000004ff440 with size: 0.000244 MiB
00:30:10.798 element at address: 0x2000004ff540 with size: 0.000244 MiB
00:30:10.798 element at address: 0x2000004ff640 with size: 0.000244 MiB
00:30:10.798 element at address: 0x2000004ff740 with size: 0.000244 MiB
00:30:10.798 element at address: 0x2000004ff840 with size: 0.000244 MiB
00:30:10.798 element at address: 0x2000004ff940 with size: 0.000244 MiB
00:30:10.798 element at address: 0x2000004ffbc0 with size: 0.000244 MiB
00:30:10.798 element at address: 0x2000004ffcc0 with size: 0.000244 MiB
00:30:10.798 element at address: 0x2000004ffdc0 with size: 0.000244 MiB
00:30:10.798 element at address: 0x20000087e1c0 with size: 0.000244 MiB
00:30:10.798 element at address: 0x20000087e2c0 with size: 0.000244 MiB
00:30:10.798 element at address: 0x20000087e3c0 with size: 0.000244 MiB
00:30:10.798 element at address: 0x20000087e4c0 with size: 0.000244 MiB
00:30:10.798 element at address: 0x20000087e5c0 with size: 0.000244 MiB
00:30:10.798 element at address: 0x20000087e6c0 with size: 0.000244 MiB
00:30:10.798 element at address: 0x20000087e7c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20000087e8c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20000087e9c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20000087eac0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20000087ebc0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20000087ecc0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20000087edc0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20000087eec0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20000087efc0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20000087f0c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20000087f1c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20000087f2c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20000087f3c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20000087f4c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x2000008ff800 with size: 0.000244 MiB
00:30:10.799 element at address: 0x2000008ffa80 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200000c7d3c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200000c7d4c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200000c7d5c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200000c7d6c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200000c7d7c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200000c7d8c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200000c7d9c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200000c7dac0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200000c7dbc0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200000c7dcc0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200000c7ddc0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200000c7dec0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200000c7dfc0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200000c7e0c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200000c7e1c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200000c7e2c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200000c7e3c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200000c7e4c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200000c7e5c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200000c7e6c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200000c7e7c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200000c7e8c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200000c7e9c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200000c7eac0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200000c7ebc0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200000cfef00 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200000cff000 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20000a5ff200 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20000a5ff300 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20000a5ff400 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20000a5ff500 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20000a5ff600 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20000a5ff700 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20000a5ff800 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20000a5ff900 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20000a5ffa00 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20000a5ffb00 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20000a5ffc00 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20000a5ffd00 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20000a5ffe00 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20000a5fff00 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200012bff180 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200012bff280 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200012bff380 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200012bff480 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200012bff580 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200012bff680 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200012bff780 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200012bff880 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200012bff980 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200012bffa80 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200012bffb80 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200012bffc80 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200012bfff00 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200012c6ef80 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200012c6f080 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200012c6f180 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200012c6f280 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200012c6f380 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200012c6f480 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200012c6f580 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200012c6f680 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200012c6f780 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200012c6f880 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200012cefbc0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x2000192fdd00 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001967cec0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001967cfc0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001967d0c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001967d1c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001967d2c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001967d3c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001967d4c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001967d5c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001967d6c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001967d7c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001967d8c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001967d9c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x2000196fdd00 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200019affc40 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200019defbc0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200019defcc0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x200019ebc680 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001b48f5c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001b48f6c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001b48f7c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001b48f8c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001b48f9c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001b48fac0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001b48fbc0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001b48fcc0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001b48fdc0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001b48fec0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001b48ffc0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001b4900c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001b4901c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001b4902c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001b4903c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001b4904c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001b4905c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001b4906c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001b4907c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001b4908c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001b4909c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001b490ac0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001b490bc0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001b490cc0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001b490dc0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001b490ec0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001b490fc0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001b4910c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001b4911c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001b4912c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001b4913c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001b4914c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001b4915c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001b4916c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001b4917c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001b4918c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001b4919c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001b491ac0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001b491bc0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001b491cc0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001b491dc0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001b491ec0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001b491fc0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001b4920c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001b4921c0 with size: 0.000244 MiB
00:30:10.799 element at address: 0x20001b4922c0 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20001b4923c0 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20001b4924c0 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20001b4925c0 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20001b4926c0 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20001b4927c0 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20001b4928c0 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20001b4929c0 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20001b492ac0 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20001b492bc0 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20001b492cc0 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20001b492dc0 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20001b492ec0 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20001b492fc0 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20001b4930c0 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20001b4931c0 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20001b4932c0 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20001b4933c0 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20001b4934c0 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20001b4935c0 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20001b4936c0 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20001b4937c0 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20001b4938c0 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20001b4939c0 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20001b493ac0 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20001b493bc0 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20001b493cc0 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20001b493dc0 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20001b493ec0 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20001b493fc0 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20001b4940c0 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20001b4941c0 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20001b4942c0 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20001b4943c0 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20001b4944c0 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20001b4945c0 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20001b4946c0 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20001b4947c0 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20001b4948c0 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20001b4949c0 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20001b494ac0 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20001b494bc0 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20001b494cc0 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20001b494dc0 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20001b494ec0 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20001b494fc0 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20001b4950c0 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20001b4951c0 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20001b4952c0 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20001b4953c0 with size: 0.000244 MiB
00:30:10.800 element at address: 0x200028863f40 with size: 0.000244 MiB
00:30:10.800 element at address: 0x200028864040 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886ad00 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886af80 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886b080 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886b180 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886b280 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886b380 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886b480 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886b580 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886b680 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886b780 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886b880 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886b980 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886ba80 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886bb80 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886bc80 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886bd80 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886be80 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886bf80 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886c080 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886c180 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886c280 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886c380 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886c480 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886c580 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886c680 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886c780 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886c880 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886c980 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886ca80 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886cb80 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886cc80 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886cd80 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886ce80 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886cf80 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886d080 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886d180 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886d280 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886d380 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886d480 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886d580 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886d680 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886d780 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886d880 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886d980 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886da80 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886db80 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886dc80 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886dd80 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886de80 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886df80 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886e080 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886e180 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886e280 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886e380 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886e480 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886e580 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886e680 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886e780 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886e880 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886e980 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886ea80 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886eb80 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886ec80 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886ed80 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886ee80 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886ef80 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886f080 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886f180 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886f280 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886f380 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886f480 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886f580 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886f680 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886f780 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886f880 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886f980 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886fa80 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886fb80 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886fc80 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886fd80 with size: 0.000244 MiB
00:30:10.800 element at address: 0x20002886fe80 with size: 0.000244 MiB
00:30:10.800 list of memzone associated elements. size: 607.930908 MiB
00:30:10.800 element at address: 0x20001b4954c0 with size: 211.416809 MiB
00:30:10.800 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:30:10.800 element at address: 0x20002886ff80 with size: 157.562622 MiB
00:30:10.800 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:30:10.800 element at address: 0x200012df1e40 with size: 100.055115 MiB
00:30:10.800 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58119_0
00:30:10.800 element at address: 0x200000dff340 with size: 48.003113 MiB
00:30:10.800 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58119_0
00:30:10.800 element at address: 0x200003ffdb40 with size: 36.008972 MiB
00:30:10.800 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58119_0
00:30:10.800 element at address: 0x200019fbe900 with size: 20.255615 MiB
00:30:10.800 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:30:10.800 element at address: 0x2000327feb00 with size: 18.005127 MiB
00:30:10.800 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:30:10.800 element at address: 0x2000004ffec0 with size: 3.000305 MiB
00:30:10.801 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58119_0
00:30:10.801 element at address: 0x2000009ffdc0 with size: 2.000549 MiB
00:30:10.801 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58119
00:30:10.801 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:30:10.801 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58119
00:30:10.801 element at address: 0x2000196fde00 with size: 1.008179 MiB
00:30:10.801 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:30:10.801 element at address: 0x200019ebc780 with size: 1.008179 MiB
00:30:10.801 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:30:10.801 element at address: 0x2000192fde00 with size: 1.008179 MiB
00:30:10.801 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:30:10.801 element at address: 0x200012cefcc0 with size: 1.008179 MiB
00:30:10.801 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:30:10.801 element at address: 0x200000cff100 with size: 1.000549 MiB
00:30:10.801 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58119
00:30:10.801 element at address: 0x2000008ffb80 with size: 1.000549 MiB
00:30:10.801 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58119
00:30:10.801 element at address: 0x200019affd40 with size: 1.000549 MiB
00:30:10.801 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58119
00:30:10.801 element at address: 0x2000326fe8c0 with size: 1.000549 MiB
00:30:10.801 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58119
00:30:10.801 element at address: 0x20000087f5c0 with size: 0.500549 MiB
00:30:10.801 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58119
00:30:10.801 element at address: 0x200000c7ecc0 with size: 0.500549 MiB
00:30:10.801 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58119
00:30:10.801 element at address: 0x20001967dac0 with size: 0.500549 MiB
00:30:10.801 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:30:10.801 element at address: 0x200012c6f980 with size: 0.500549 MiB
00:30:10.801 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:30:10.801 element at address: 0x200019e7c440 with size: 0.250549 MiB
00:30:10.801 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:30:10.801 element at address: 0x2000002b78c0 with size: 0.125549 MiB
00:30:10.801 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58119
00:30:10.801 element at address: 0x20000085df80 with size: 0.125549 MiB
00:30:10.801 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58119
00:30:10.801 element at address: 0x2000192f5ac0 with size: 0.031799 MiB
00:30:10.801 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:30:10.801 element at address: 0x200028864140 with size: 0.023804 MiB
00:30:10.801 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:30:10.801 element at address: 0x200000859d40 with size: 0.016174 MiB
00:30:10.801 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58119
00:30:10.801 element at address: 0x20002886a2c0 with size: 0.002502 MiB
00:30:10.801 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:30:10.801 element at address: 0x2000004ffa40 with size: 0.000366 MiB
00:30:10.801 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58119
00:30:10.801 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:30:10.801 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58119
00:30:10.801 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:30:10.801 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58119
00:30:10.801 element at address: 0x20002886ae00 with size: 0.000366 MiB
00:30:10.801 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:30:10.801 17:27:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:30:10.801 17:27:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58119
00:30:10.801 17:27:11 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58119 ']'
00:30:10.801 17:27:11 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58119
00:30:10.801 17:27:11 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:30:10.801 17:27:11 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:10.801 17:27:11 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58119
00:30:11.061 17:27:11 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:30:11.061 17:27:11 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:30:11.061 killing process with pid 58119 17:27:11 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58119'
00:30:11.061 17:27:11 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58119
00:30:11.061 17:27:11 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58119
00:30:14.441
00:30:14.441 real 0m4.821s
00:30:14.441 user 0m4.783s
00:30:14.441 sys 0m0.658s
00:30:14.441 17:27:14 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:14.441 17:27:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:30:14.441 ************************************
00:30:14.441 END TEST dpdk_mem_utility
00:30:14.441 ************************************
00:30:14.441 17:27:14 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:30:14.441 17:27:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:30:14.441 17:27:14 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:30:14.441 17:27:14 -- common/autotest_common.sh@10 -- # set +x
00:30:14.441 ************************************
00:30:14.441 START TEST event
00:30:14.441 ************************************
00:30:14.441 17:27:14 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:30:14.441 * Looking for test storage...
00:30:14.441 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:30:14.441 17:27:14 event -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:30:14.441 17:27:14 event -- common/autotest_common.sh@1693 -- # lcov --version
00:30:14.441 17:27:14 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:30:14.441 17:27:14 event -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:30:14.441 17:27:14 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:30:14.441 17:27:14 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:30:14.441 17:27:14 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:30:14.441 17:27:14 event -- scripts/common.sh@336 -- # IFS=.-:
00:30:14.441 17:27:14 event -- scripts/common.sh@336 -- # read -ra ver1
00:30:14.441 17:27:14 event -- scripts/common.sh@337 -- # IFS=.-:
00:30:14.441 17:27:14 event -- scripts/common.sh@337 -- # read -ra ver2
00:30:14.441 17:27:14 event -- scripts/common.sh@338 -- # local 'op=<'
00:30:14.441 17:27:14 event -- scripts/common.sh@340 -- # ver1_l=2
00:30:14.441 17:27:14 event -- scripts/common.sh@341 -- # ver2_l=1
00:30:14.441 17:27:14 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:30:14.441 17:27:14 event -- scripts/common.sh@344 -- # case "$op" in
00:30:14.441 17:27:14 event -- scripts/common.sh@345 -- # : 1
00:30:14.441 17:27:14 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:30:14.441 17:27:14 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:30:14.441 17:27:14 event -- scripts/common.sh@365 -- # decimal 1
00:30:14.441 17:27:14 event -- scripts/common.sh@353 -- # local d=1
00:30:14.441 17:27:14 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:30:14.441 17:27:14 event -- scripts/common.sh@355 -- # echo 1
00:30:14.441 17:27:14 event -- scripts/common.sh@365 -- # ver1[v]=1
00:30:14.441 17:27:14 event -- scripts/common.sh@366 -- # decimal 2
00:30:14.441 17:27:14 event -- scripts/common.sh@353 -- # local d=2
00:30:14.441 17:27:14 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:30:14.441 17:27:14 event -- scripts/common.sh@355 -- # echo 2
00:30:14.441 17:27:14 event -- scripts/common.sh@366 -- # ver2[v]=2
00:30:14.441 17:27:14 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:30:14.441 17:27:14 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:30:14.442 17:27:14 event -- scripts/common.sh@368 -- # return 0
00:30:14.442 17:27:14 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:30:14.442 17:27:14 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:30:14.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:14.442 --rc genhtml_branch_coverage=1
00:30:14.442 --rc genhtml_function_coverage=1
00:30:14.442 --rc genhtml_legend=1
00:30:14.442 --rc geninfo_all_blocks=1
00:30:14.442 --rc geninfo_unexecuted_blocks=1
00:30:14.442
00:30:14.442 '
00:30:14.442 17:27:14 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:30:14.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:14.442 --rc genhtml_branch_coverage=1
00:30:14.442 --rc genhtml_function_coverage=1
00:30:14.442 --rc genhtml_legend=1
00:30:14.442 --rc geninfo_all_blocks=1
00:30:14.442 --rc geninfo_unexecuted_blocks=1
00:30:14.442
00:30:14.442 '
00:30:14.442 17:27:14 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:30:14.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:14.442 --rc genhtml_branch_coverage=1
00:30:14.442 --rc genhtml_function_coverage=1
00:30:14.442 --rc genhtml_legend=1
00:30:14.442 --rc geninfo_all_blocks=1
00:30:14.442 --rc geninfo_unexecuted_blocks=1
00:30:14.442
00:30:14.442 '
00:30:14.442 17:27:14 event -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:30:14.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:14.442 --rc genhtml_branch_coverage=1
00:30:14.442 --rc genhtml_function_coverage=1
00:30:14.442 --rc genhtml_legend=1
00:30:14.442 --rc geninfo_all_blocks=1
00:30:14.442 --rc geninfo_unexecuted_blocks=1
00:30:14.442
00:30:14.442 '
00:30:14.442 17:27:14 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:30:14.442 17:27:14 event -- bdev/nbd_common.sh@6 -- # set -e
00:30:14.442 17:27:14 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:30:14.442 17:27:14 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:30:14.442 17:27:14 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:30:14.442 17:27:14 event -- common/autotest_common.sh@10 -- # set +x
00:30:14.442 ************************************
00:30:14.442 START TEST event_perf
00:30:14.442 ************************************
00:30:14.442 17:27:14 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:30:14.442 Running I/O for 1 seconds...[2024-11-26 17:27:14.734333] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization...
00:30:14.442 [2024-11-26 17:27:14.734476] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58233 ] 00:30:14.442 [2024-11-26 17:27:14.910149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:14.442 [2024-11-26 17:27:15.085754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:14.442 [2024-11-26 17:27:15.086019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:14.442 Running I/O for 1 seconds...[2024-11-26 17:27:15.086193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:14.442 [2024-11-26 17:27:15.086286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:15.821 00:30:15.821 lcore 0: 82462 00:30:15.821 lcore 1: 82459 00:30:15.821 lcore 2: 82457 00:30:15.821 lcore 3: 82459 00:30:15.821 done. 
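The `lcore 0..3` counters above come from running event_perf with `-m 0xF`, a core mask that enables four reactors. How a hex core mask expands into the lcore list can be sketched with plain bash arithmetic (a minimal illustration, not SPDK's actual mask parser):

```shell
# Decode a DPDK/SPDK-style core mask into the lcores it enables.
# 0xF has bits 0-3 set, matching the four "Reactor started on
# core N" lines in the log above.
mask=0xF
cores=""
for ((i = 0; i < 32; i++)); do
  if (( (mask >> i) & 1 )); then
    cores+="$i "
  fi
done
cores="${cores% }"   # trim trailing space
echo "cores: $cores"
```

With `mask=0x3` (as used by the app_repeat test later in this run) the same loop would yield cores 0 and 1.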
00:30:15.821 00:30:15.821 real 0m1.701s 00:30:15.821 user 0m4.432s 00:30:15.821 sys 0m0.137s 00:30:15.821 17:27:16 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:15.821 17:27:16 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:30:15.821 ************************************ 00:30:15.821 END TEST event_perf 00:30:15.821 ************************************ 00:30:15.821 17:27:16 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:30:15.821 17:27:16 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:15.821 17:27:16 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:15.821 17:27:16 event -- common/autotest_common.sh@10 -- # set +x 00:30:15.821 ************************************ 00:30:15.821 START TEST event_reactor 00:30:15.821 ************************************ 00:30:15.821 17:27:16 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:30:15.821 [2024-11-26 17:27:16.486247] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:30:15.821 [2024-11-26 17:27:16.486422] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58278 ] 00:30:16.081 [2024-11-26 17:27:16.683724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:16.340 [2024-11-26 17:27:16.818604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:17.719 test_start 00:30:17.719 oneshot 00:30:17.719 tick 100 00:30:17.719 tick 100 00:30:17.719 tick 250 00:30:17.719 tick 100 00:30:17.719 tick 100 00:30:17.719 tick 100 00:30:17.719 tick 250 00:30:17.719 tick 500 00:30:17.719 tick 100 00:30:17.719 tick 100 00:30:17.719 tick 250 00:30:17.719 tick 100 00:30:17.719 tick 100 00:30:17.719 test_end 00:30:17.719 00:30:17.719 real 0m1.648s 00:30:17.719 user 0m1.431s 00:30:17.719 sys 0m0.107s 00:30:17.719 17:27:18 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:17.719 17:27:18 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:30:17.719 ************************************ 00:30:17.719 END TEST event_reactor 00:30:17.719 ************************************ 00:30:17.719 17:27:18 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:30:17.719 17:27:18 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:17.719 17:27:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:17.719 17:27:18 event -- common/autotest_common.sh@10 -- # set +x 00:30:17.719 ************************************ 00:30:17.719 START TEST event_reactor_perf 00:30:17.719 ************************************ 00:30:17.719 17:27:18 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:30:17.719 [2024-11-26 
17:27:18.202464] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:30:17.720 [2024-11-26 17:27:18.202636] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58314 ] 00:30:17.720 [2024-11-26 17:27:18.385288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:17.978 [2024-11-26 17:27:18.538407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:19.362 test_start 00:30:19.362 test_end 00:30:19.362 Performance: 282121 events per second 00:30:19.362 00:30:19.362 real 0m1.679s 00:30:19.362 user 0m1.445s 00:30:19.362 sys 0m0.122s 00:30:19.362 17:27:19 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:19.362 17:27:19 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:30:19.362 ************************************ 00:30:19.362 END TEST event_reactor_perf 00:30:19.362 ************************************ 00:30:19.362 17:27:19 event -- event/event.sh@49 -- # uname -s 00:30:19.362 17:27:19 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:30:19.362 17:27:19 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:30:19.362 17:27:19 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:19.362 17:27:19 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:19.362 17:27:19 event -- common/autotest_common.sh@10 -- # set +x 00:30:19.362 ************************************ 00:30:19.362 START TEST event_scheduler 00:30:19.362 ************************************ 00:30:19.362 17:27:19 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:30:19.362 * Looking for test storage... 
00:30:19.362 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:30:19.362 17:27:19 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:19.362 17:27:19 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:30:19.362 17:27:19 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:19.622 17:27:20 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:19.622 17:27:20 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:19.622 17:27:20 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:19.622 17:27:20 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:19.622 17:27:20 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:30:19.622 17:27:20 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:30:19.622 17:27:20 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:30:19.622 17:27:20 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:30:19.622 17:27:20 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:30:19.622 17:27:20 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:30:19.622 17:27:20 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:30:19.622 17:27:20 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:19.622 17:27:20 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:30:19.622 17:27:20 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:30:19.622 17:27:20 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:19.622 17:27:20 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:19.622 17:27:20 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:30:19.622 17:27:20 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:30:19.622 17:27:20 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:19.622 17:27:20 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:30:19.622 17:27:20 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:30:19.622 17:27:20 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:30:19.622 17:27:20 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:30:19.622 17:27:20 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:19.622 17:27:20 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:30:19.622 17:27:20 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:30:19.622 17:27:20 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:19.622 17:27:20 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:19.622 17:27:20 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:30:19.622 17:27:20 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:19.622 17:27:20 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:19.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:19.622 --rc genhtml_branch_coverage=1 00:30:19.622 --rc genhtml_function_coverage=1 00:30:19.622 --rc genhtml_legend=1 00:30:19.622 --rc geninfo_all_blocks=1 00:30:19.622 --rc geninfo_unexecuted_blocks=1 00:30:19.622 00:30:19.622 ' 00:30:19.622 17:27:20 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:19.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:19.622 --rc genhtml_branch_coverage=1 00:30:19.622 --rc genhtml_function_coverage=1 00:30:19.622 --rc 
genhtml_legend=1 00:30:19.622 --rc geninfo_all_blocks=1 00:30:19.622 --rc geninfo_unexecuted_blocks=1 00:30:19.622 00:30:19.622 ' 00:30:19.622 17:27:20 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:19.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:19.622 --rc genhtml_branch_coverage=1 00:30:19.622 --rc genhtml_function_coverage=1 00:30:19.622 --rc genhtml_legend=1 00:30:19.622 --rc geninfo_all_blocks=1 00:30:19.622 --rc geninfo_unexecuted_blocks=1 00:30:19.622 00:30:19.622 ' 00:30:19.622 17:27:20 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:19.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:19.622 --rc genhtml_branch_coverage=1 00:30:19.622 --rc genhtml_function_coverage=1 00:30:19.622 --rc genhtml_legend=1 00:30:19.622 --rc geninfo_all_blocks=1 00:30:19.622 --rc geninfo_unexecuted_blocks=1 00:30:19.622 00:30:19.622 ' 00:30:19.622 17:27:20 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:30:19.622 17:27:20 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58385 00:30:19.622 17:27:20 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:30:19.622 17:27:20 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58385 00:30:19.622 17:27:20 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58385 ']' 00:30:19.622 17:27:20 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:19.622 17:27:20 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:30:19.622 17:27:20 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:19.622 17:27:20 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:30:19.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:19.622 17:27:20 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:19.622 17:27:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:30:19.622 [2024-11-26 17:27:20.176464] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:30:19.622 [2024-11-26 17:27:20.176627] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58385 ] 00:30:19.882 [2024-11-26 17:27:20.344840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:19.882 [2024-11-26 17:27:20.542641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:19.882 [2024-11-26 17:27:20.542769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:19.882 [2024-11-26 17:27:20.542685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:19.882 [2024-11-26 17:27:20.542964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:20.828 17:27:21 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:20.828 17:27:21 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:30:20.828 17:27:21 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:30:20.828 17:27:21 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.828 17:27:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:30:20.828 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:30:20.828 POWER: Cannot set governor of lcore 0 to userspace 00:30:20.828 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:30:20.828 POWER: Cannot set governor of lcore 0 to performance 00:30:20.828 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:30:20.828 POWER: Cannot set governor of lcore 0 to userspace 00:30:20.828 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:30:20.828 POWER: Cannot set governor of lcore 0 to userspace 00:30:20.829 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:30:20.829 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:30:20.829 POWER: Unable to set Power Management Environment for lcore 0 00:30:20.829 [2024-11-26 17:27:21.276204] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:30:20.829 [2024-11-26 17:27:21.276231] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:30:20.829 [2024-11-26 17:27:21.276243] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:30:20.829 [2024-11-26 17:27:21.276266] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:30:20.829 [2024-11-26 17:27:21.276275] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:30:20.829 [2024-11-26 17:27:21.276286] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:30:20.829 17:27:21 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.829 17:27:21 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:30:20.829 17:27:21 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.829 17:27:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:30:21.090 [2024-11-26 17:27:21.671308] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
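The scheduler_create_thread test that follows issues one `scheduler_thread_create` RPC per single-core mask (0x1, 0x2, 0x4, 0x8). The shape of that loop can be sketched as follows; `rpc_cmd` is stubbed with `echo` here purely so the call pattern is visible outside the SPDK test tree:

```shell
# Hedged sketch of the per-core thread-creation loop seen in the
# trace below. In the real test, rpc_cmd wraps scripts/rpc.py and
# the scheduler_plugin; this stub only mirrors the argument shape.
rpc_cmd() { echo "rpc: $*"; }

for i in 0 1 2 3; do
  mask=$(printf '0x%x' $((1 << i)))
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create \
    -n active_pinned -m "$mask" -a 100
done
```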
00:30:21.090 17:27:21 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.090 17:27:21 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:30:21.090 17:27:21 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:21.090 17:27:21 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:21.090 17:27:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:30:21.090 ************************************ 00:30:21.090 START TEST scheduler_create_thread 00:30:21.090 ************************************ 00:30:21.090 17:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:30:21.090 17:27:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:30:21.090 17:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.090 17:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:30:21.090 2 00:30:21.090 17:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.090 17:27:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:30:21.090 17:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.090 17:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:30:21.090 3 00:30:21.090 17:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.090 17:27:21 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:30:21.090 17:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.090 17:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:30:21.090 4 00:30:21.090 17:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.090 17:27:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:30:21.090 17:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.090 17:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:30:21.090 5 00:30:21.090 17:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.090 17:27:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:30:21.090 17:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.090 17:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:30:21.090 6 00:30:21.090 17:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.090 17:27:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:30:21.090 17:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.090 17:27:21 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:30:21.090 7 00:30:21.090 17:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.090 17:27:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:30:21.090 17:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.090 17:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:30:21.090 8 00:30:21.090 17:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.090 17:27:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:30:21.090 17:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.090 17:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:30:21.090 9 00:30:21.090 17:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.090 17:27:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:30:21.090 17:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.090 17:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:30:21.090 10 00:30:21.090 17:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.090 17:27:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:30:21.090 17:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.090 17:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:30:21.348 17:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.348 17:27:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:30:21.348 17:27:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:30:21.348 17:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.348 17:27:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:30:22.283 17:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.283 17:27:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:30:22.283 17:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.283 17:27:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:30:23.658 17:27:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.658 17:27:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:30:23.658 17:27:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:30:23.658 17:27:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.658 17:27:24 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:30:24.596 ************************************ 00:30:24.596 END TEST scheduler_create_thread 00:30:24.596 ************************************ 00:30:24.596 17:27:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.596 00:30:24.596 real 0m3.377s 00:30:24.596 user 0m0.021s 00:30:24.596 sys 0m0.007s 00:30:24.596 17:27:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:24.596 17:27:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:30:24.596 17:27:25 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:30:24.596 17:27:25 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58385 00:30:24.596 17:27:25 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58385 ']' 00:30:24.596 17:27:25 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58385 00:30:24.596 17:27:25 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:30:24.596 17:27:25 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:24.596 17:27:25 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58385 00:30:24.596 killing process with pid 58385 00:30:24.596 17:27:25 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:30:24.596 17:27:25 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:30:24.596 17:27:25 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58385' 00:30:24.596 17:27:25 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58385 00:30:24.596 17:27:25 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58385 00:30:24.854 [2024-11-26 17:27:25.441868] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:30:26.228 00:30:26.228 real 0m6.985s 00:30:26.228 user 0m15.217s 00:30:26.228 sys 0m0.486s 00:30:26.228 17:27:26 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:26.228 17:27:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:30:26.228 ************************************ 00:30:26.228 END TEST event_scheduler 00:30:26.228 ************************************ 00:30:26.228 17:27:26 event -- event/event.sh@51 -- # modprobe -n nbd 00:30:26.228 17:27:26 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:30:26.228 17:27:26 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:26.228 17:27:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:26.228 17:27:26 event -- common/autotest_common.sh@10 -- # set +x 00:30:26.228 ************************************ 00:30:26.228 START TEST app_repeat 00:30:26.228 ************************************ 00:30:26.228 17:27:26 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:30:26.228 17:27:26 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:26.228 17:27:26 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:26.228 17:27:26 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:30:26.228 17:27:26 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:30:26.228 17:27:26 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:30:26.228 17:27:26 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:30:26.228 17:27:26 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:30:26.228 17:27:26 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58513 00:30:26.228 17:27:26 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:30:26.228 
17:27:26 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:30:26.228 17:27:26 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58513' 00:30:26.228 Process app_repeat pid: 58513 00:30:26.228 17:27:26 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:30:26.228 spdk_app_start Round 0 00:30:26.228 17:27:26 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:30:26.228 17:27:26 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58513 /var/tmp/spdk-nbd.sock 00:30:26.228 17:27:26 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58513 ']' 00:30:26.228 17:27:26 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:30:26.228 17:27:26 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:26.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:30:26.228 17:27:26 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:30:26.228 17:27:26 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:26.228 17:27:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:30:26.485 [2024-11-26 17:27:26.975102] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:30:26.485 [2024-11-26 17:27:26.975287] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58513 ] 00:30:26.485 [2024-11-26 17:27:27.153019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:26.743 [2024-11-26 17:27:27.330428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:26.743 [2024-11-26 17:27:27.330433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:27.677 17:27:28 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:27.677 17:27:28 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:30:27.677 17:27:28 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:30:27.934 Malloc0 00:30:27.934 17:27:28 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:30:28.192 Malloc1 00:30:28.476 17:27:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:30:28.476 17:27:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:28.476 17:27:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:30:28.476 17:27:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:30:28.477 17:27:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:28.477 17:27:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:30:28.477 17:27:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:30:28.477 17:27:28 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:28.477 17:27:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:30:28.477 17:27:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:28.477 17:27:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:28.477 17:27:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:28.477 17:27:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:30:28.477 17:27:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:28.477 17:27:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:28.477 17:27:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:30:28.733 /dev/nbd0 00:30:28.733 17:27:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:28.733 17:27:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:28.733 17:27:29 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:30:28.733 17:27:29 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:30:28.733 17:27:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:28.733 17:27:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:28.733 17:27:29 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:30:28.733 17:27:29 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:30:28.733 17:27:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:28.733 17:27:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:28.733 17:27:29 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:30:28.733 1+0 records in 00:30:28.733 1+0 
records out 00:30:28.733 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404038 s, 10.1 MB/s 00:30:28.733 17:27:29 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:30:28.733 17:27:29 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:30:28.733 17:27:29 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:30:28.733 17:27:29 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:28.733 17:27:29 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:30:28.733 17:27:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:28.733 17:27:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:28.733 17:27:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:30:28.991 /dev/nbd1 00:30:28.991 17:27:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:28.991 17:27:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:28.991 17:27:29 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:30:28.991 17:27:29 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:30:28.991 17:27:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:28.991 17:27:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:28.991 17:27:29 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:30:28.991 17:27:29 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:30:28.991 17:27:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:28.991 17:27:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:28.991 17:27:29 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:30:28.991 1+0 records in 00:30:28.991 1+0 records out 00:30:28.991 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000314301 s, 13.0 MB/s 00:30:28.991 17:27:29 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:30:28.991 17:27:29 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:30:28.991 17:27:29 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:30:28.991 17:27:29 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:28.991 17:27:29 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:30:28.991 17:27:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:28.991 17:27:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:28.991 17:27:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:28.991 17:27:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:28.991 17:27:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:29.249 17:27:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:30:29.249 { 00:30:29.249 "nbd_device": "/dev/nbd0", 00:30:29.249 "bdev_name": "Malloc0" 00:30:29.249 }, 00:30:29.249 { 00:30:29.249 "nbd_device": "/dev/nbd1", 00:30:29.249 "bdev_name": "Malloc1" 00:30:29.249 } 00:30:29.249 ]' 00:30:29.249 17:27:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:30:29.249 { 00:30:29.249 "nbd_device": "/dev/nbd0", 00:30:29.249 "bdev_name": "Malloc0" 00:30:29.249 }, 00:30:29.249 { 00:30:29.249 "nbd_device": "/dev/nbd1", 00:30:29.249 "bdev_name": "Malloc1" 00:30:29.249 } 00:30:29.249 ]' 00:30:29.249 17:27:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:30:29.508 17:27:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:30:29.508 /dev/nbd1' 00:30:29.508 17:27:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:30:29.508 /dev/nbd1' 00:30:29.508 17:27:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:29.508 17:27:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:30:29.508 17:27:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:30:29.508 17:27:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:30:29.508 17:27:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:30:29.508 17:27:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:30:29.508 17:27:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:29.508 17:27:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:30:29.508 17:27:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:30:29.508 17:27:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:30:29.508 17:27:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:30:29.508 17:27:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:30:29.508 256+0 records in 00:30:29.508 256+0 records out 00:30:29.508 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0065339 s, 160 MB/s 00:30:29.508 17:27:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:29.508 17:27:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:30:29.508 256+0 records in 00:30:29.508 256+0 records out 00:30:29.508 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0253978 s, 41.3 MB/s 00:30:29.508 17:27:30 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:29.508 17:27:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:30:29.508 256+0 records in 00:30:29.508 256+0 records out 00:30:29.508 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0266185 s, 39.4 MB/s 00:30:29.508 17:27:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:30:29.509 17:27:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:29.509 17:27:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:30:29.509 17:27:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:30:29.509 17:27:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:30:29.509 17:27:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:30:29.509 17:27:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:30:29.509 17:27:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:29.509 17:27:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:30:29.509 17:27:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:29.509 17:27:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:30:29.509 17:27:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:30:29.509 17:27:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:30:29.509 17:27:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:29.509 17:27:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:29.509 17:27:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:29.509 17:27:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:30:29.509 17:27:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:29.509 17:27:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:29.767 17:27:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:29.767 17:27:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:29.767 17:27:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:29.767 17:27:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:29.767 17:27:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:29.767 17:27:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:29.767 17:27:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:30:29.767 17:27:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:30:29.767 17:27:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:29.768 17:27:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:30:30.335 17:27:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:30.335 17:27:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:30.335 17:27:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:30.335 17:27:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:30.335 17:27:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:30.335 17:27:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:30.335 17:27:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:30:30.335 17:27:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:30:30.335 17:27:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:30.335 17:27:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:30.335 17:27:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:30.335 17:27:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:30:30.335 17:27:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:30:30.335 17:27:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:30.593 17:27:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:30:30.593 17:27:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:30:30.593 17:27:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:30.593 17:27:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:30:30.593 17:27:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:30:30.593 17:27:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:30:30.593 17:27:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:30:30.593 17:27:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:30:30.593 17:27:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:30:30.593 17:27:31 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:30:31.160 17:27:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:30:32.534 [2024-11-26 17:27:33.001011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:32.534 [2024-11-26 17:27:33.139318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:32.534 [2024-11-26 17:27:33.139327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:32.813 
[2024-11-26 17:27:33.368368] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:30:32.813 [2024-11-26 17:27:33.368473] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:30:34.186 17:27:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:30:34.186 spdk_app_start Round 1 00:30:34.186 17:27:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:30:34.186 17:27:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58513 /var/tmp/spdk-nbd.sock 00:30:34.186 17:27:34 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58513 ']' 00:30:34.186 17:27:34 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:30:34.186 17:27:34 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:34.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:30:34.186 17:27:34 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:30:34.186 17:27:34 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:34.186 17:27:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:30:34.444 17:27:34 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:34.444 17:27:34 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:30:34.444 17:27:34 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:30:34.702 Malloc0 00:30:34.702 17:27:35 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:30:34.961 Malloc1 00:30:34.961 17:27:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:30:34.961 17:27:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:34.961 17:27:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:30:34.961 17:27:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:30:34.961 17:27:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:34.961 17:27:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:30:34.961 17:27:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:30:34.961 17:27:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:34.961 17:27:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:30:34.961 17:27:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:34.961 17:27:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:34.961 17:27:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:34.961 17:27:35 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:30:34.961 17:27:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:34.961 17:27:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:34.961 17:27:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:30:35.528 /dev/nbd0 00:30:35.528 17:27:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:35.528 17:27:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:35.528 17:27:35 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:30:35.528 17:27:35 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:30:35.528 17:27:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:35.528 17:27:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:35.528 17:27:35 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:30:35.528 17:27:35 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:30:35.528 17:27:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:35.528 17:27:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:35.528 17:27:35 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:30:35.528 1+0 records in 00:30:35.528 1+0 records out 00:30:35.528 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283917 s, 14.4 MB/s 00:30:35.528 17:27:35 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:30:35.528 17:27:35 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:30:35.528 17:27:35 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:30:35.528 
17:27:35 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:35.528 17:27:35 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:30:35.528 17:27:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:35.528 17:27:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:35.528 17:27:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:30:35.528 /dev/nbd1 00:30:35.784 17:27:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:35.785 17:27:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:35.785 17:27:36 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:30:35.785 17:27:36 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:30:35.785 17:27:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:35.785 17:27:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:35.785 17:27:36 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:30:35.785 17:27:36 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:30:35.785 17:27:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:35.785 17:27:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:35.785 17:27:36 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:30:35.785 1+0 records in 00:30:35.785 1+0 records out 00:30:35.785 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000339728 s, 12.1 MB/s 00:30:35.785 17:27:36 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:30:35.785 17:27:36 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:30:35.785 17:27:36 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:30:35.785 17:27:36 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:35.785 17:27:36 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:30:35.785 17:27:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:35.785 17:27:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:35.785 17:27:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:35.785 17:27:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:35.785 17:27:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:36.041 17:27:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:30:36.041 { 00:30:36.041 "nbd_device": "/dev/nbd0", 00:30:36.041 "bdev_name": "Malloc0" 00:30:36.041 }, 00:30:36.041 { 00:30:36.041 "nbd_device": "/dev/nbd1", 00:30:36.041 "bdev_name": "Malloc1" 00:30:36.041 } 00:30:36.041 ]' 00:30:36.041 17:27:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:36.041 17:27:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:30:36.041 { 00:30:36.041 "nbd_device": "/dev/nbd0", 00:30:36.041 "bdev_name": "Malloc0" 00:30:36.041 }, 00:30:36.041 { 00:30:36.041 "nbd_device": "/dev/nbd1", 00:30:36.041 "bdev_name": "Malloc1" 00:30:36.041 } 00:30:36.042 ]' 00:30:36.042 17:27:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:30:36.042 /dev/nbd1' 00:30:36.042 17:27:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:36.042 17:27:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:30:36.042 /dev/nbd1' 00:30:36.042 17:27:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:30:36.042 17:27:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:30:36.042 
17:27:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:30:36.042 17:27:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:30:36.042 17:27:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:30:36.042 17:27:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:36.042 17:27:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:30:36.042 17:27:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:30:36.042 17:27:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:30:36.042 17:27:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:30:36.042 17:27:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:30:36.042 256+0 records in 00:30:36.042 256+0 records out 00:30:36.042 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00630572 s, 166 MB/s 00:30:36.042 17:27:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:36.042 17:27:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:30:36.042 256+0 records in 00:30:36.042 256+0 records out 00:30:36.042 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0221337 s, 47.4 MB/s 00:30:36.042 17:27:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:36.042 17:27:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:30:36.042 256+0 records in 00:30:36.042 256+0 records out 00:30:36.042 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0262734 s, 39.9 MB/s 00:30:36.042 17:27:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:30:36.042 17:27:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:36.042 17:27:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:30:36.042 17:27:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:30:36.042 17:27:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:30:36.042 17:27:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:30:36.042 17:27:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:30:36.042 17:27:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:36.042 17:27:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:30:36.042 17:27:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:36.042 17:27:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:30:36.042 17:27:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:30:36.042 17:27:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:30:36.042 17:27:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:36.042 17:27:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:36.042 17:27:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:36.042 17:27:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:30:36.042 17:27:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:36.042 17:27:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:36.300 17:27:36 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:36.300 17:27:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:36.300 17:27:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:36.300 17:27:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:36.300 17:27:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:36.300 17:27:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:36.300 17:27:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:30:36.300 17:27:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:30:36.300 17:27:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:36.300 17:27:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:30:36.559 17:27:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:36.559 17:27:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:36.559 17:27:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:36.559 17:27:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:36.559 17:27:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:36.559 17:27:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:36.559 17:27:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:30:36.559 17:27:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:30:36.559 17:27:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:36.559 17:27:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:36.559 17:27:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:36.816 17:27:37 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:30:36.816 17:27:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:36.816 17:27:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:30:37.107 17:27:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:30:37.107 17:27:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:30:37.107 17:27:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:37.107 17:27:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:30:37.107 17:27:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:30:37.107 17:27:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:30:37.107 17:27:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:30:37.107 17:27:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:30:37.107 17:27:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:30:37.107 17:27:37 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:30:37.373 17:27:38 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:30:38.745 [2024-11-26 17:27:39.407108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:39.003 [2024-11-26 17:27:39.545492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:39.003 [2024-11-26 17:27:39.545493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:39.261 [2024-11-26 17:27:39.778083] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:30:39.261 [2024-11-26 17:27:39.778182] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:30:40.638 17:27:41 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:30:40.638 spdk_app_start Round 2 00:30:40.638 17:27:41 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:30:40.638 17:27:41 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58513 /var/tmp/spdk-nbd.sock 00:30:40.638 17:27:41 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58513 ']' 00:30:40.638 17:27:41 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:30:40.638 17:27:41 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:40.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:30:40.638 17:27:41 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:30:40.638 17:27:41 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:40.638 17:27:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:30:40.638 17:27:41 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:40.638 17:27:41 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:30:40.638 17:27:41 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:30:41.205 Malloc0 00:30:41.205 17:27:41 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:30:41.495 Malloc1 00:30:41.495 17:27:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:30:41.495 17:27:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:41.495 17:27:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:30:41.495 
17:27:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:30:41.495 17:27:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:41.495 17:27:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:30:41.495 17:27:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:30:41.495 17:27:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:41.495 17:27:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:30:41.495 17:27:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:41.495 17:27:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:41.495 17:27:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:41.495 17:27:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:30:41.495 17:27:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:41.495 17:27:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:41.495 17:27:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:30:41.754 /dev/nbd0 00:30:41.754 17:27:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:41.754 17:27:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:41.754 17:27:42 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:30:41.754 17:27:42 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:30:41.754 17:27:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:41.754 17:27:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:41.754 17:27:42 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:30:41.754 17:27:42 
event.app_repeat -- common/autotest_common.sh@877 -- # break 00:30:41.754 17:27:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:41.754 17:27:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:41.754 17:27:42 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:30:41.754 1+0 records in 00:30:41.754 1+0 records out 00:30:41.754 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000364466 s, 11.2 MB/s 00:30:41.754 17:27:42 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:30:41.754 17:27:42 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:30:41.754 17:27:42 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:30:41.754 17:27:42 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:41.754 17:27:42 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:30:41.754 17:27:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:41.754 17:27:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:41.754 17:27:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:30:42.012 /dev/nbd1 00:30:42.012 17:27:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:42.012 17:27:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:42.012 17:27:42 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:30:42.012 17:27:42 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:30:42.012 17:27:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:42.012 17:27:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:42.012 17:27:42 
event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:30:42.012 17:27:42 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:30:42.012 17:27:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:42.012 17:27:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:42.012 17:27:42 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:30:42.012 1+0 records in 00:30:42.012 1+0 records out 00:30:42.012 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000432898 s, 9.5 MB/s 00:30:42.012 17:27:42 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:30:42.012 17:27:42 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:30:42.012 17:27:42 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:30:42.012 17:27:42 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:42.012 17:27:42 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:30:42.012 17:27:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:42.012 17:27:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:42.012 17:27:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:42.012 17:27:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:42.012 17:27:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:42.578 17:27:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:30:42.579 { 00:30:42.579 "nbd_device": "/dev/nbd0", 00:30:42.579 "bdev_name": "Malloc0" 00:30:42.579 }, 00:30:42.579 { 00:30:42.579 "nbd_device": "/dev/nbd1", 00:30:42.579 "bdev_name": 
"Malloc1" 00:30:42.579 } 00:30:42.579 ]' 00:30:42.579 17:27:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:30:42.579 { 00:30:42.579 "nbd_device": "/dev/nbd0", 00:30:42.579 "bdev_name": "Malloc0" 00:30:42.579 }, 00:30:42.579 { 00:30:42.579 "nbd_device": "/dev/nbd1", 00:30:42.579 "bdev_name": "Malloc1" 00:30:42.579 } 00:30:42.579 ]' 00:30:42.579 17:27:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:42.579 17:27:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:30:42.579 /dev/nbd1' 00:30:42.579 17:27:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:30:42.579 /dev/nbd1' 00:30:42.579 17:27:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:42.579 17:27:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:30:42.579 17:27:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:30:42.579 17:27:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:30:42.579 17:27:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:30:42.579 17:27:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:30:42.579 17:27:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:42.579 17:27:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:30:42.579 17:27:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:30:42.579 17:27:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:30:42.579 17:27:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:30:42.579 17:27:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:30:42.579 256+0 records in 00:30:42.579 256+0 records out 00:30:42.579 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140575 s, 74.6 MB/s 
00:30:42.579 17:27:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:42.579 17:27:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:30:42.579 256+0 records in 00:30:42.579 256+0 records out 00:30:42.579 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212415 s, 49.4 MB/s 00:30:42.579 17:27:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:42.579 17:27:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:30:42.579 256+0 records in 00:30:42.579 256+0 records out 00:30:42.579 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0286797 s, 36.6 MB/s 00:30:42.579 17:27:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:30:42.579 17:27:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:42.579 17:27:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:30:42.579 17:27:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:30:42.579 17:27:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:30:42.579 17:27:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:30:42.579 17:27:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:30:42.579 17:27:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:42.579 17:27:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:30:42.579 17:27:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:42.579 17:27:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:30:42.579 17:27:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:30:42.579 17:27:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:30:42.579 17:27:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:42.579 17:27:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:42.579 17:27:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:42.579 17:27:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:30:42.579 17:27:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:42.579 17:27:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:42.837 17:27:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:42.837 17:27:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:42.837 17:27:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:42.837 17:27:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:42.837 17:27:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:42.837 17:27:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:42.837 17:27:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:30:42.837 17:27:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:30:42.837 17:27:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:42.837 17:27:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:30:43.404 17:27:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:43.404 17:27:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:30:43.404 17:27:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:43.404 17:27:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:43.404 17:27:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:43.404 17:27:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:43.404 17:27:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:30:43.404 17:27:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:30:43.404 17:27:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:43.404 17:27:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:43.404 17:27:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:43.404 17:27:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:30:43.404 17:27:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:43.404 17:27:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:30:43.663 17:27:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:30:43.663 17:27:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:30:43.663 17:27:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:43.663 17:27:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:30:43.663 17:27:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:30:43.663 17:27:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:30:43.663 17:27:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:30:43.663 17:27:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:30:43.663 17:27:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:30:43.663 17:27:44 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:30:43.922 17:27:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:30:45.298 [2024-11-26 17:27:45.959627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:45.557 [2024-11-26 17:27:46.097626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:45.557 [2024-11-26 17:27:46.097635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:45.818 [2024-11-26 17:27:46.330944] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:30:45.818 [2024-11-26 17:27:46.331040] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:30:47.220 17:27:47 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58513 /var/tmp/spdk-nbd.sock 00:30:47.220 17:27:47 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58513 ']' 00:30:47.220 17:27:47 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:30:47.220 17:27:47 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:47.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:30:47.220 17:27:47 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:30:47.220 17:27:47 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:47.220 17:27:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:30:47.220 17:27:47 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:47.220 17:27:47 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:30:47.220 17:27:47 event.app_repeat -- event/event.sh@39 -- # killprocess 58513 00:30:47.220 17:27:47 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58513 ']' 00:30:47.220 17:27:47 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58513 00:30:47.220 17:27:47 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:30:47.220 17:27:47 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:47.220 17:27:47 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58513 00:30:47.220 17:27:47 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:47.220 17:27:47 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:47.221 17:27:47 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58513' 00:30:47.221 killing process with pid 58513 00:30:47.221 17:27:47 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58513 00:30:47.221 17:27:47 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58513 00:30:48.596 spdk_app_start is called in Round 0. 00:30:48.596 Shutdown signal received, stop current app iteration 00:30:48.596 Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 reinitialization... 00:30:48.596 spdk_app_start is called in Round 1. 00:30:48.596 Shutdown signal received, stop current app iteration 00:30:48.596 Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 reinitialization... 00:30:48.596 spdk_app_start is called in Round 2. 
00:30:48.596 Shutdown signal received, stop current app iteration 00:30:48.596 Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 reinitialization... 00:30:48.596 spdk_app_start is called in Round 3. 00:30:48.596 Shutdown signal received, stop current app iteration 00:30:48.596 17:27:49 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:30:48.596 17:27:49 event.app_repeat -- event/event.sh@42 -- # return 0 00:30:48.596 00:30:48.596 real 0m22.195s 00:30:48.596 user 0m48.829s 00:30:48.596 sys 0m3.229s 00:30:48.596 17:27:49 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:48.596 17:27:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:30:48.596 ************************************ 00:30:48.596 END TEST app_repeat 00:30:48.596 ************************************ 00:30:48.596 17:27:49 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:30:48.596 17:27:49 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:30:48.596 17:27:49 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:48.596 17:27:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:48.596 17:27:49 event -- common/autotest_common.sh@10 -- # set +x 00:30:48.596 ************************************ 00:30:48.596 START TEST cpu_locks 00:30:48.596 ************************************ 00:30:48.596 17:27:49 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:30:48.596 * Looking for test storage... 
00:30:48.596 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:30:48.596 17:27:49 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:48.596 17:27:49 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:30:48.596 17:27:49 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:48.855 17:27:49 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:48.855 17:27:49 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:48.855 17:27:49 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:48.855 17:27:49 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:48.855 17:27:49 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:30:48.855 17:27:49 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:30:48.855 17:27:49 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:30:48.855 17:27:49 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:30:48.855 17:27:49 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:30:48.855 17:27:49 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:30:48.855 17:27:49 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:30:48.855 17:27:49 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:48.855 17:27:49 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:30:48.855 17:27:49 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:30:48.855 17:27:49 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:48.855 17:27:49 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:48.855 17:27:49 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:30:48.855 17:27:49 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:30:48.855 17:27:49 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:48.855 17:27:49 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:30:48.855 17:27:49 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:30:48.855 17:27:49 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:30:48.855 17:27:49 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:30:48.855 17:27:49 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:48.855 17:27:49 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:30:48.855 17:27:49 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:30:48.855 17:27:49 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:48.855 17:27:49 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:48.855 17:27:49 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:30:48.855 17:27:49 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:48.855 17:27:49 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:48.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:48.855 --rc genhtml_branch_coverage=1 00:30:48.855 --rc genhtml_function_coverage=1 00:30:48.855 --rc genhtml_legend=1 00:30:48.855 --rc geninfo_all_blocks=1 00:30:48.855 --rc geninfo_unexecuted_blocks=1 00:30:48.855 00:30:48.855 ' 00:30:48.855 17:27:49 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:48.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:48.855 --rc genhtml_branch_coverage=1 00:30:48.855 --rc genhtml_function_coverage=1 00:30:48.855 --rc genhtml_legend=1 00:30:48.855 --rc geninfo_all_blocks=1 00:30:48.855 --rc geninfo_unexecuted_blocks=1 
00:30:48.855 00:30:48.855 ' 00:30:48.856 17:27:49 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:48.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:48.856 --rc genhtml_branch_coverage=1 00:30:48.856 --rc genhtml_function_coverage=1 00:30:48.856 --rc genhtml_legend=1 00:30:48.856 --rc geninfo_all_blocks=1 00:30:48.856 --rc geninfo_unexecuted_blocks=1 00:30:48.856 00:30:48.856 ' 00:30:48.856 17:27:49 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:48.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:48.856 --rc genhtml_branch_coverage=1 00:30:48.856 --rc genhtml_function_coverage=1 00:30:48.856 --rc genhtml_legend=1 00:30:48.856 --rc geninfo_all_blocks=1 00:30:48.856 --rc geninfo_unexecuted_blocks=1 00:30:48.856 00:30:48.856 ' 00:30:48.856 17:27:49 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:30:48.856 17:27:49 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:30:48.856 17:27:49 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:30:48.856 17:27:49 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:30:48.856 17:27:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:48.856 17:27:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:48.856 17:27:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:30:48.856 ************************************ 00:30:48.856 START TEST default_locks 00:30:48.856 ************************************ 00:30:48.856 17:27:49 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:30:48.856 17:27:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58990 00:30:48.856 17:27:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58990 00:30:48.856 17:27:49 event.cpu_locks.default_locks -- 
event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:30:48.856 17:27:49 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58990 ']' 00:30:48.856 17:27:49 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:48.856 17:27:49 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:48.856 17:27:49 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:48.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:48.856 17:27:49 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:48.856 17:27:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:30:48.856 [2024-11-26 17:27:49.493910] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:30:48.856 [2024-11-26 17:27:49.494041] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58990 ] 00:30:49.115 [2024-11-26 17:27:49.678334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:49.115 [2024-11-26 17:27:49.795928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:50.049 17:27:50 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:50.049 17:27:50 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:30:50.049 17:27:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58990 00:30:50.049 17:27:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58990 00:30:50.049 17:27:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:30:50.645 17:27:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58990 00:30:50.645 17:27:51 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58990 ']' 00:30:50.645 17:27:51 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58990 00:30:50.645 17:27:51 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:30:50.645 17:27:51 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:50.645 17:27:51 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58990 00:30:50.645 17:27:51 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:50.645 17:27:51 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:50.645 17:27:51 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 58990' 00:30:50.645 killing process with pid 58990 00:30:50.645 17:27:51 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58990 00:30:50.645 17:27:51 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58990 00:30:53.926 17:27:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58990 00:30:53.926 17:27:53 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:30:53.926 17:27:53 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58990 00:30:53.926 17:27:53 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:30:53.926 17:27:53 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:53.926 17:27:53 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:30:53.926 17:27:53 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:53.926 17:27:53 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58990 00:30:53.926 17:27:53 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58990 ']' 00:30:53.926 17:27:53 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:53.926 17:27:53 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:53.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:53.926 17:27:53 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:53.926 17:27:53 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:53.926 17:27:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:30:53.926 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58990) - No such process 00:30:53.926 ERROR: process (pid: 58990) is no longer running 00:30:53.926 17:27:53 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:53.926 17:27:53 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:30:53.926 17:27:53 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:30:53.926 17:27:53 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:53.926 17:27:53 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:53.926 17:27:53 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:53.926 17:27:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:30:53.926 17:27:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:30:53.926 17:27:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:30:53.926 17:27:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:30:53.926 00:30:53.926 real 0m4.541s 00:30:53.926 user 0m4.527s 00:30:53.926 sys 0m0.700s 00:30:53.926 17:27:53 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:53.926 17:27:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:30:53.926 ************************************ 00:30:53.926 END TEST default_locks 00:30:53.926 ************************************ 00:30:53.926 17:27:53 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:30:53.926 17:27:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:30:53.926 17:27:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:53.926 17:27:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:30:53.926 ************************************ 00:30:53.926 START TEST default_locks_via_rpc 00:30:53.926 ************************************ 00:30:53.926 17:27:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:30:53.926 17:27:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59070 00:30:53.926 17:27:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59070 00:30:53.926 17:27:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59070 ']' 00:30:53.926 17:27:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:53.926 17:27:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:53.926 17:27:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:53.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:53.926 17:27:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:30:53.926 17:27:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:53.926 17:27:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:53.926 [2024-11-26 17:27:54.076136] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:30:53.926 [2024-11-26 17:27:54.076306] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59070 ] 00:30:53.926 [2024-11-26 17:27:54.250445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:53.926 [2024-11-26 17:27:54.387263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:54.863 17:27:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:54.863 17:27:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:30:54.863 17:27:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:30:54.863 17:27:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.863 17:27:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:54.863 17:27:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.863 17:27:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:30:54.863 17:27:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:30:54.863 17:27:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:30:54.863 17:27:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:30:54.863 17:27:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:30:54.863 17:27:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.863 17:27:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:54.863 17:27:55 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.863 17:27:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59070 00:30:54.863 17:27:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59070 00:30:54.863 17:27:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:30:55.433 17:27:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59070 00:30:55.433 17:27:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59070 ']' 00:30:55.433 17:27:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59070 00:30:55.433 17:27:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:30:55.433 17:27:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:55.433 17:27:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59070 00:30:55.433 17:27:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:55.433 17:27:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:55.433 killing process with pid 59070 00:30:55.433 17:27:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59070' 00:30:55.433 17:27:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59070 00:30:55.433 17:27:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59070 00:30:58.725 00:30:58.725 real 0m4.714s 00:30:58.725 user 0m4.753s 00:30:58.725 sys 0m0.697s 00:30:58.725 17:27:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:58.725 17:27:58 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:58.725 ************************************ 00:30:58.725 END TEST default_locks_via_rpc 00:30:58.725 ************************************ 00:30:58.725 17:27:58 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:30:58.725 17:27:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:58.725 17:27:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:58.725 17:27:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:30:58.725 ************************************ 00:30:58.725 START TEST non_locking_app_on_locked_coremask 00:30:58.725 ************************************ 00:30:58.725 17:27:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:30:58.725 17:27:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59152 00:30:58.725 17:27:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:30:58.725 17:27:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59152 /var/tmp/spdk.sock 00:30:58.725 17:27:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59152 ']' 00:30:58.725 17:27:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:58.725 17:27:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:58.725 17:27:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:58.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:58.725 17:27:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:58.725 17:27:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:30:58.725 [2024-11-26 17:27:58.840095] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:30:58.725 [2024-11-26 17:27:58.840238] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59152 ] 00:30:58.725 [2024-11-26 17:27:59.022237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:58.725 [2024-11-26 17:27:59.162460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:59.676 17:28:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:59.676 17:28:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:30:59.676 17:28:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59182 00:30:59.676 17:28:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:30:59.676 17:28:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59182 /var/tmp/spdk2.sock 00:30:59.676 17:28:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59182 ']' 00:30:59.676 17:28:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:30:59.676 17:28:00 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:59.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:30:59.676 17:28:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:30:59.676 17:28:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:59.676 17:28:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:30:59.676 [2024-11-26 17:28:00.286192] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:30:59.676 [2024-11-26 17:28:00.286344] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59182 ] 00:30:59.962 [2024-11-26 17:28:00.472185] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:30:59.962 [2024-11-26 17:28:00.472289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:00.220 [2024-11-26 17:28:00.751306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:02.755 17:28:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:02.755 17:28:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:31:02.755 17:28:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59152 00:31:02.755 17:28:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59152 00:31:02.755 17:28:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:31:03.012 17:28:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59152 00:31:03.013 17:28:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59152 ']' 00:31:03.013 17:28:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59152 00:31:03.013 17:28:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:31:03.013 17:28:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:03.013 17:28:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59152 00:31:03.013 17:28:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:03.013 17:28:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:03.013 killing process with pid 59152 00:31:03.013 17:28:03 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 59152' 00:31:03.013 17:28:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59152 00:31:03.013 17:28:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59152 00:31:09.601 17:28:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59182 00:31:09.601 17:28:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59182 ']' 00:31:09.601 17:28:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59182 00:31:09.601 17:28:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:31:09.601 17:28:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:09.601 17:28:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59182 00:31:09.601 17:28:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:09.601 17:28:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:09.601 killing process with pid 59182 00:31:09.601 17:28:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59182' 00:31:09.601 17:28:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59182 00:31:09.601 17:28:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59182 00:31:11.505 00:31:11.505 real 0m13.381s 00:31:11.505 user 0m13.779s 00:31:11.505 sys 0m1.361s 00:31:11.505 17:28:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:31:11.505 17:28:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:31:11.505 ************************************ 00:31:11.505 END TEST non_locking_app_on_locked_coremask 00:31:11.505 ************************************ 00:31:11.506 17:28:12 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:31:11.506 17:28:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:11.506 17:28:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:11.506 17:28:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:31:11.506 ************************************ 00:31:11.506 START TEST locking_app_on_unlocked_coremask 00:31:11.506 ************************************ 00:31:11.506 17:28:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:31:11.506 17:28:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59341 00:31:11.506 17:28:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59341 /var/tmp/spdk.sock 00:31:11.506 17:28:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:31:11.506 17:28:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59341 ']' 00:31:11.506 17:28:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:11.506 17:28:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:11.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:11.506 17:28:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:11.506 17:28:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:11.506 17:28:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:31:11.765 [2024-11-26 17:28:12.297921] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:31:11.765 [2024-11-26 17:28:12.298090] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59341 ] 00:31:12.023 [2024-11-26 17:28:12.479888] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:31:12.023 [2024-11-26 17:28:12.479978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:12.023 [2024-11-26 17:28:12.622180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:13.434 17:28:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:13.434 17:28:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:31:13.434 17:28:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59363 00:31:13.434 17:28:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59363 /var/tmp/spdk2.sock 00:31:13.434 17:28:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59363 ']' 00:31:13.434 17:28:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:31:13.434 17:28:13 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:13.434 17:28:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:31:13.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:31:13.434 17:28:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:13.434 17:28:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:31:13.434 17:28:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:31:13.434 [2024-11-26 17:28:13.773958] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:31:13.434 [2024-11-26 17:28:13.774101] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59363 ] 00:31:13.434 [2024-11-26 17:28:13.960527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:13.692 [2024-11-26 17:28:14.247860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:16.218 17:28:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:16.218 17:28:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:31:16.218 17:28:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59363 00:31:16.218 17:28:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59363 00:31:16.218 17:28:16 
event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:31:16.786 17:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59341 00:31:16.786 17:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59341 ']' 00:31:16.786 17:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59341 00:31:16.786 17:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:31:16.786 17:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:16.786 17:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59341 00:31:16.786 17:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:16.786 17:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:16.786 killing process with pid 59341 00:31:16.786 17:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59341' 00:31:16.786 17:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59341 00:31:16.786 17:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59341 00:31:23.359 17:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59363 00:31:23.359 17:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59363 ']' 00:31:23.359 17:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59363 00:31:23.359 17:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@959 -- # uname 00:31:23.359 17:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:23.359 17:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59363 00:31:23.359 17:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:23.359 17:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:23.359 killing process with pid 59363 00:31:23.359 17:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59363' 00:31:23.359 17:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59363 00:31:23.359 17:28:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59363 00:31:25.339 00:31:25.339 real 0m13.539s 00:31:25.339 user 0m14.032s 00:31:25.339 sys 0m1.446s 00:31:25.339 17:28:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:25.339 17:28:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:31:25.339 ************************************ 00:31:25.340 END TEST locking_app_on_unlocked_coremask 00:31:25.340 ************************************ 00:31:25.340 17:28:25 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:31:25.340 17:28:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:25.340 17:28:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:25.340 17:28:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:31:25.340 ************************************ 00:31:25.340 START TEST 
locking_app_on_locked_coremask 00:31:25.340 ************************************ 00:31:25.340 17:28:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:31:25.340 17:28:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:31:25.340 17:28:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59528 00:31:25.340 17:28:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59528 /var/tmp/spdk.sock 00:31:25.340 17:28:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59528 ']' 00:31:25.340 17:28:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:25.340 17:28:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:25.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:25.340 17:28:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:25.340 17:28:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:25.340 17:28:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:31:25.340 [2024-11-26 17:28:25.877410] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:31:25.340 [2024-11-26 17:28:25.877551] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59528 ] 00:31:25.599 [2024-11-26 17:28:26.058272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:25.599 [2024-11-26 17:28:26.190148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:26.979 17:28:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:26.979 17:28:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:31:26.979 17:28:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59550 00:31:26.979 17:28:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59550 /var/tmp/spdk2.sock 00:31:26.979 17:28:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:31:26.979 17:28:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:31:26.979 17:28:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59550 /var/tmp/spdk2.sock 00:31:26.979 17:28:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:31:26.979 17:28:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:26.979 17:28:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:31:26.979 17:28:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:31:26.979 17:28:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59550 /var/tmp/spdk2.sock 00:31:26.979 17:28:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59550 ']' 00:31:26.979 17:28:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:31:26.979 17:28:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:26.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:31:26.979 17:28:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:31:26.979 17:28:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:26.979 17:28:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:31:26.979 [2024-11-26 17:28:27.367241] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:31:26.979 [2024-11-26 17:28:27.367410] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59550 ] 00:31:26.979 [2024-11-26 17:28:27.560770] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59528 has claimed it. 00:31:26.979 [2024-11-26 17:28:27.560857] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:31:27.548 ERROR: process (pid: 59550) is no longer running 00:31:27.548 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59550) - No such process 00:31:27.548 17:28:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:27.548 17:28:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:31:27.548 17:28:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:31:27.548 17:28:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:27.548 17:28:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:27.548 17:28:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:27.548 17:28:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59528 00:31:27.548 17:28:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59528 00:31:27.548 17:28:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:31:27.807 17:28:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59528 00:31:27.807 17:28:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59528 ']' 00:31:27.807 17:28:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59528 00:31:27.807 17:28:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:31:27.807 17:28:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:27.807 17:28:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59528 00:31:28.066 
17:28:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:28.066 17:28:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:28.066 killing process with pid 59528 00:31:28.066 17:28:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59528' 00:31:28.066 17:28:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59528 00:31:28.066 17:28:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59528 00:31:31.358 00:31:31.358 real 0m5.563s 00:31:31.358 user 0m5.772s 00:31:31.358 sys 0m0.922s 00:31:31.358 17:28:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:31.358 17:28:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:31:31.358 ************************************ 00:31:31.358 END TEST locking_app_on_locked_coremask 00:31:31.358 ************************************ 00:31:31.358 17:28:31 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:31:31.358 17:28:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:31.358 17:28:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:31.358 17:28:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:31:31.358 ************************************ 00:31:31.358 START TEST locking_overlapped_coremask 00:31:31.358 ************************************ 00:31:31.358 17:28:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:31:31.358 17:28:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59625 00:31:31.358 17:28:31 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:31:31.358 17:28:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59625 /var/tmp/spdk.sock 00:31:31.358 17:28:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59625 ']' 00:31:31.358 17:28:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:31.358 17:28:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:31.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:31.358 17:28:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:31.358 17:28:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:31.358 17:28:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:31:31.358 [2024-11-26 17:28:31.542918] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
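This test starts a first target with `-m 0x7` (cores 0-2) and will shortly start a second with `-m 0x1c` (cores 2-4); the two masks overlap on core 2, which is the collision the test expects. A minimal sketch checking the overlap directly from the masks (the variable names here are illustrative, not from the test script):

```shell
mask_a=0x7    # first spdk_tgt: cores 0,1,2
mask_b=0x1c   # second spdk_tgt: cores 2,3,4

# bitwise AND of the two coremasks gives the contested cores
overlap=$(( mask_a & mask_b ))
printf 'overlap mask: 0x%x\n' "$overlap"   # 0x4, i.e. core 2 is contested
```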
00:31:31.358 [2024-11-26 17:28:31.543078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59625 ] 00:31:31.358 [2024-11-26 17:28:31.735754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:31.358 [2024-11-26 17:28:31.875703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:31.358 [2024-11-26 17:28:31.875862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:31.358 [2024-11-26 17:28:31.875915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:32.297 17:28:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:32.297 17:28:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:31:32.297 17:28:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59643 00:31:32.297 17:28:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59643 /var/tmp/spdk2.sock 00:31:32.297 17:28:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:31:32.297 17:28:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59643 /var/tmp/spdk2.sock 00:31:32.297 17:28:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:31:32.297 17:28:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:31:32.297 17:28:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:32.297 17:28:32 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:31:32.297 17:28:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:32.297 17:28:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59643 /var/tmp/spdk2.sock 00:31:32.297 17:28:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59643 ']' 00:31:32.297 17:28:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:31:32.297 17:28:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:32.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:31:32.297 17:28:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:31:32.297 17:28:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:32.297 17:28:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:31:32.557 [2024-11-26 17:28:32.997849] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:31:32.557 [2024-11-26 17:28:32.998013] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59643 ] 00:31:32.557 [2024-11-26 17:28:33.195307] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59625 has claimed it. 00:31:32.557 [2024-11-26 17:28:33.195401] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
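The `claim_cpu_cores` error above, together with the `lslocks ... | grep -q spdk_cpu_lock` check earlier in this log, reflects per-core lock files under `/var/tmp` claimed with an exclusive non-blocking flock. A self-contained sketch of that claim pattern against a scratch file (the file name mirrors the `spdk_cpu_lock_NNN` files this log greps for; the mechanism is modeled on the log's messages, not taken from SPDK source):

```shell
# scratch lock file standing in for /var/tmp/spdk_cpu_lock_002
lockdir=$(mktemp -d)
lockfile="$lockdir/spdk_cpu_lock_002"

# first claimant: exclusive, non-blocking flock on its own fd
exec 9>"$lockfile"
flock -x -n 9 && claim1=ok

# second claimant: a new open file description on the same file
# conflicts with the held lock, like the "has claimed it" error above
exec 8>"$lockfile"
flock -x -n 8 || claim2=refused
```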
00:31:33.127 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59643) - No such process 00:31:33.127 ERROR: process (pid: 59643) is no longer running 00:31:33.127 17:28:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:33.127 17:28:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:31:33.127 17:28:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:31:33.127 17:28:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:33.127 17:28:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:33.127 17:28:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:33.127 17:28:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:31:33.127 17:28:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:31:33.127 17:28:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:31:33.127 17:28:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:31:33.127 17:28:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59625 00:31:33.127 17:28:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59625 ']' 00:31:33.127 17:28:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59625 00:31:33.127 17:28:33 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:31:33.127 17:28:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:33.127 17:28:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59625 00:31:33.127 17:28:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:33.127 17:28:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:33.127 killing process with pid 59625 00:31:33.127 17:28:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59625' 00:31:33.127 17:28:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59625 00:31:33.127 17:28:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59625 00:31:36.449 00:31:36.449 real 0m5.058s 00:31:36.449 user 0m13.862s 00:31:36.449 sys 0m0.675s 00:31:36.449 17:28:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:36.449 17:28:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:31:36.449 ************************************ 00:31:36.449 END TEST locking_overlapped_coremask 00:31:36.449 ************************************ 00:31:36.449 17:28:36 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:31:36.449 17:28:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:36.449 17:28:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:36.449 17:28:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:31:36.449 ************************************ 00:31:36.449 START TEST 
locking_overlapped_coremask_via_rpc 00:31:36.449 ************************************ 00:31:36.449 17:28:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:31:36.449 17:28:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59718 00:31:36.449 17:28:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:31:36.449 17:28:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59718 /var/tmp/spdk.sock 00:31:36.449 17:28:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59718 ']' 00:31:36.449 17:28:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:36.449 17:28:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:36.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:36.450 17:28:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:36.450 17:28:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:36.450 17:28:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:36.450 [2024-11-26 17:28:36.670592] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:31:36.450 [2024-11-26 17:28:36.670769] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59718 ] 00:31:36.450 [2024-11-26 17:28:36.857769] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:31:36.450 [2024-11-26 17:28:36.857824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:36.450 [2024-11-26 17:28:36.981687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:36.450 [2024-11-26 17:28:36.981842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:36.450 [2024-11-26 17:28:36.981878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:37.386 17:28:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:37.386 17:28:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:31:37.386 17:28:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:31:37.386 17:28:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59736 00:31:37.386 17:28:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59736 /var/tmp/spdk2.sock 00:31:37.386 17:28:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59736 ']' 00:31:37.386 17:28:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:31:37.386 17:28:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:37.386 17:28:37 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:31:37.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:31:37.386 17:28:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:37.386 17:28:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:37.386 [2024-11-26 17:28:38.041981] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:31:37.386 [2024-11-26 17:28:38.042122] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59736 ] 00:31:37.645 [2024-11-26 17:28:38.230340] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:31:37.645 [2024-11-26 17:28:38.233573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:37.903 [2024-11-26 17:28:38.497682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:37.903 [2024-11-26 17:28:38.501722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:37.903 [2024-11-26 17:28:38.501740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:40.461 17:28:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:40.462 17:28:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:31:40.462 17:28:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:31:40.462 17:28:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.462 17:28:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:40.462 17:28:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.462 17:28:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:31:40.462 17:28:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:31:40.462 17:28:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:31:40.462 17:28:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:31:40.462 17:28:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:40.462 17:28:40 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:31:40.462 17:28:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:40.462 17:28:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:31:40.462 17:28:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.462 17:28:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:40.462 [2024-11-26 17:28:40.707717] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59718 has claimed it. 00:31:40.462 request: 00:31:40.462 { 00:31:40.462 "method": "framework_enable_cpumask_locks", 00:31:40.462 "req_id": 1 00:31:40.462 } 00:31:40.462 Got JSON-RPC error response 00:31:40.462 response: 00:31:40.462 { 00:31:40.462 "code": -32603, 00:31:40.462 "message": "Failed to claim CPU core: 2" 00:31:40.462 } 00:31:40.462 17:28:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:40.462 17:28:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:31:40.462 17:28:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:40.462 17:28:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:40.462 17:28:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:40.462 17:28:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59718 /var/tmp/spdk.sock 00:31:40.462 17:28:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 59718 ']' 00:31:40.462 17:28:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:40.462 17:28:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:40.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:40.462 17:28:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:40.462 17:28:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:40.462 17:28:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:40.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:31:40.462 17:28:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:40.462 17:28:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:31:40.462 17:28:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59736 /var/tmp/spdk2.sock 00:31:40.462 17:28:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59736 ']' 00:31:40.462 17:28:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:31:40.462 17:28:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:40.462 17:28:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:31:40.462 17:28:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:40.462 17:28:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:40.722 17:28:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:40.722 17:28:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:31:40.722 17:28:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:31:40.722 17:28:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:31:40.722 17:28:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:31:40.722 17:28:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:31:40.722 00:31:40.722 real 0m4.688s 00:31:40.722 user 0m1.478s 00:31:40.722 sys 0m0.218s 00:31:40.722 17:28:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:40.722 17:28:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:40.722 ************************************ 00:31:40.722 END TEST locking_overlapped_coremask_via_rpc 00:31:40.722 ************************************ 00:31:40.722 17:28:41 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:31:40.722 17:28:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59718 ]] 00:31:40.722 17:28:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59718 00:31:40.722 17:28:41 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59718 ']' 00:31:40.722 17:28:41 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59718 00:31:40.722 17:28:41 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:31:40.722 17:28:41 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:40.722 17:28:41 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59718 00:31:40.722 killing process with pid 59718 00:31:40.722 17:28:41 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:40.722 17:28:41 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:40.722 17:28:41 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59718' 00:31:40.722 17:28:41 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59718 00:31:40.722 17:28:41 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59718 00:31:44.013 17:28:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59736 ]] 00:31:44.013 17:28:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59736 00:31:44.013 17:28:44 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59736 ']' 00:31:44.013 17:28:44 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59736 00:31:44.013 killing process with pid 59736 00:31:44.013 17:28:44 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:31:44.013 17:28:44 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:44.013 17:28:44 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59736 00:31:44.013 17:28:44 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:31:44.013 17:28:44 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:31:44.013 17:28:44 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59736' 00:31:44.013 17:28:44 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59736 00:31:44.013 17:28:44 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59736 00:31:46.551 17:28:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:31:46.551 17:28:46 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:31:46.551 17:28:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59718 ]] 00:31:46.551 17:28:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59718 00:31:46.551 17:28:46 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59718 ']' 00:31:46.551 17:28:46 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59718 00:31:46.551 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59718) - No such process 00:31:46.551 Process with pid 59718 is not found 00:31:46.551 17:28:46 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59718 is not found' 00:31:46.551 17:28:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59736 ]] 00:31:46.551 17:28:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59736 00:31:46.551 17:28:46 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59736 ']' 00:31:46.551 17:28:46 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59736 00:31:46.551 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59736) - No such process 00:31:46.551 Process with pid 59736 is not found 00:31:46.551 17:28:46 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59736 is not found' 00:31:46.551 17:28:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:31:46.551 00:31:46.551 real 0m57.616s 00:31:46.551 user 1m37.451s 00:31:46.551 sys 0m7.218s 00:31:46.551 17:28:46 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:46.551 17:28:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:31:46.551 
************************************ 00:31:46.551 END TEST cpu_locks 00:31:46.551 ************************************ 00:31:46.551 00:31:46.551 real 1m32.380s 00:31:46.551 user 2m49.032s 00:31:46.551 sys 0m11.649s 00:31:46.551 17:28:46 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:46.551 17:28:46 event -- common/autotest_common.sh@10 -- # set +x 00:31:46.551 ************************************ 00:31:46.551 END TEST event 00:31:46.551 ************************************ 00:31:46.551 17:28:46 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:31:46.551 17:28:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:46.551 17:28:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:46.551 17:28:46 -- common/autotest_common.sh@10 -- # set +x 00:31:46.551 ************************************ 00:31:46.551 START TEST thread 00:31:46.551 ************************************ 00:31:46.551 17:28:46 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:31:46.551 * Looking for test storage... 
00:31:46.551 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:31:46.552 17:28:47 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:46.552 17:28:47 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:31:46.552 17:28:47 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:46.552 17:28:47 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:46.552 17:28:47 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:46.552 17:28:47 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:46.552 17:28:47 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:46.552 17:28:47 thread -- scripts/common.sh@336 -- # IFS=.-: 00:31:46.552 17:28:47 thread -- scripts/common.sh@336 -- # read -ra ver1 00:31:46.552 17:28:47 thread -- scripts/common.sh@337 -- # IFS=.-: 00:31:46.552 17:28:47 thread -- scripts/common.sh@337 -- # read -ra ver2 00:31:46.552 17:28:47 thread -- scripts/common.sh@338 -- # local 'op=<' 00:31:46.552 17:28:47 thread -- scripts/common.sh@340 -- # ver1_l=2 00:31:46.552 17:28:47 thread -- scripts/common.sh@341 -- # ver2_l=1 00:31:46.552 17:28:47 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:46.552 17:28:47 thread -- scripts/common.sh@344 -- # case "$op" in 00:31:46.552 17:28:47 thread -- scripts/common.sh@345 -- # : 1 00:31:46.552 17:28:47 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:46.552 17:28:47 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:46.552 17:28:47 thread -- scripts/common.sh@365 -- # decimal 1 00:31:46.552 17:28:47 thread -- scripts/common.sh@353 -- # local d=1 00:31:46.552 17:28:47 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:46.552 17:28:47 thread -- scripts/common.sh@355 -- # echo 1 00:31:46.552 17:28:47 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:31:46.552 17:28:47 thread -- scripts/common.sh@366 -- # decimal 2 00:31:46.552 17:28:47 thread -- scripts/common.sh@353 -- # local d=2 00:31:46.552 17:28:47 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:46.552 17:28:47 thread -- scripts/common.sh@355 -- # echo 2 00:31:46.552 17:28:47 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:31:46.552 17:28:47 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:46.552 17:28:47 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:46.552 17:28:47 thread -- scripts/common.sh@368 -- # return 0 00:31:46.552 17:28:47 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:46.552 17:28:47 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:46.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:46.552 --rc genhtml_branch_coverage=1 00:31:46.552 --rc genhtml_function_coverage=1 00:31:46.552 --rc genhtml_legend=1 00:31:46.552 --rc geninfo_all_blocks=1 00:31:46.552 --rc geninfo_unexecuted_blocks=1 00:31:46.552 00:31:46.552 ' 00:31:46.552 17:28:47 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:46.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:46.552 --rc genhtml_branch_coverage=1 00:31:46.552 --rc genhtml_function_coverage=1 00:31:46.552 --rc genhtml_legend=1 00:31:46.552 --rc geninfo_all_blocks=1 00:31:46.552 --rc geninfo_unexecuted_blocks=1 00:31:46.552 00:31:46.552 ' 00:31:46.552 17:28:47 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:46.552 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:46.552 --rc genhtml_branch_coverage=1 00:31:46.552 --rc genhtml_function_coverage=1 00:31:46.552 --rc genhtml_legend=1 00:31:46.552 --rc geninfo_all_blocks=1 00:31:46.552 --rc geninfo_unexecuted_blocks=1 00:31:46.552 00:31:46.552 ' 00:31:46.552 17:28:47 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:46.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:46.552 --rc genhtml_branch_coverage=1 00:31:46.552 --rc genhtml_function_coverage=1 00:31:46.552 --rc genhtml_legend=1 00:31:46.552 --rc geninfo_all_blocks=1 00:31:46.552 --rc geninfo_unexecuted_blocks=1 00:31:46.552 00:31:46.552 ' 00:31:46.552 17:28:47 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:31:46.552 17:28:47 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:31:46.552 17:28:47 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:46.552 17:28:47 thread -- common/autotest_common.sh@10 -- # set +x 00:31:46.552 ************************************ 00:31:46.552 START TEST thread_poller_perf 00:31:46.552 ************************************ 00:31:46.552 17:28:47 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:31:46.552 [2024-11-26 17:28:47.169616] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:31:46.552 [2024-11-26 17:28:47.169765] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59942 ] 00:31:46.812 [2024-11-26 17:28:47.349895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:47.107 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:31:47.107 [2024-11-26 17:28:47.519189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:48.484 [2024-11-26T17:28:49.179Z] ====================================== 00:31:48.484 [2024-11-26T17:28:49.179Z] busy:2306562534 (cyc) 00:31:48.484 [2024-11-26T17:28:49.179Z] total_run_count: 346000 00:31:48.484 [2024-11-26T17:28:49.179Z] tsc_hz: 2290000000 (cyc) 00:31:48.484 [2024-11-26T17:28:49.179Z] ====================================== 00:31:48.484 [2024-11-26T17:28:49.179Z] poller_cost: 6666 (cyc), 2910 (nsec) 00:31:48.484 00:31:48.484 real 0m1.630s 00:31:48.484 user 0m1.441s 00:31:48.484 sys 0m0.081s 00:31:48.484 17:28:48 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:48.484 17:28:48 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:31:48.484 ************************************ 00:31:48.484 END TEST thread_poller_perf 00:31:48.484 ************************************ 00:31:48.484 17:28:48 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:31:48.484 17:28:48 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:31:48.484 17:28:48 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:48.484 17:28:48 thread -- common/autotest_common.sh@10 -- # set +x 00:31:48.484 ************************************ 00:31:48.484 START TEST thread_poller_perf 00:31:48.484 
************************************ 00:31:48.484 17:28:48 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:31:48.484 [2024-11-26 17:28:48.856699] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:31:48.484 [2024-11-26 17:28:48.856863] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59979 ] 00:31:48.484 [2024-11-26 17:28:49.050452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:48.484 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:31:48.484 [2024-11-26 17:28:49.162369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:49.870 [2024-11-26T17:28:50.565Z] ====================================== 00:31:49.870 [2024-11-26T17:28:50.565Z] busy:2293872592 (cyc) 00:31:49.870 [2024-11-26T17:28:50.565Z] total_run_count: 4715000 00:31:49.870 [2024-11-26T17:28:50.565Z] tsc_hz: 2290000000 (cyc) 00:31:49.870 [2024-11-26T17:28:50.565Z] ====================================== 00:31:49.870 [2024-11-26T17:28:50.565Z] poller_cost: 486 (cyc), 212 (nsec) 00:31:49.870 00:31:49.870 real 0m1.585s 00:31:49.870 user 0m1.380s 00:31:49.870 sys 0m0.098s 00:31:49.870 17:28:50 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:49.870 17:28:50 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:31:49.870 ************************************ 00:31:49.870 END TEST thread_poller_perf 00:31:49.870 ************************************ 00:31:49.870 17:28:50 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:31:49.870 00:31:49.870 real 0m3.549s 00:31:49.870 user 0m2.975s 00:31:49.870 sys 0m0.369s 00:31:49.870 17:28:50 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:31:49.870 17:28:50 thread -- common/autotest_common.sh@10 -- # set +x 00:31:49.870 ************************************ 00:31:49.870 END TEST thread 00:31:49.870 ************************************ 00:31:49.870 17:28:50 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:31:49.870 17:28:50 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:31:49.870 17:28:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:49.870 17:28:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:49.870 17:28:50 -- common/autotest_common.sh@10 -- # set +x 00:31:49.870 ************************************ 00:31:49.871 START TEST app_cmdline 00:31:49.871 ************************************ 00:31:49.871 17:28:50 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:31:50.130 * Looking for test storage... 00:31:50.130 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:31:50.130 17:28:50 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:50.130 17:28:50 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:50.130 17:28:50 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:31:50.130 17:28:50 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:50.130 17:28:50 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:50.130 17:28:50 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:50.130 17:28:50 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:50.130 17:28:50 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:31:50.130 17:28:50 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:31:50.130 17:28:50 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:31:50.130 17:28:50 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:31:50.130 17:28:50 app_cmdline -- scripts/common.sh@338 -- # 
local 'op=<' 00:31:50.130 17:28:50 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:31:50.130 17:28:50 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:31:50.130 17:28:50 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:50.130 17:28:50 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:31:50.130 17:28:50 app_cmdline -- scripts/common.sh@345 -- # : 1 00:31:50.130 17:28:50 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:50.130 17:28:50 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:50.130 17:28:50 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:31:50.130 17:28:50 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:31:50.130 17:28:50 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:50.130 17:28:50 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:31:50.130 17:28:50 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:31:50.130 17:28:50 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:31:50.130 17:28:50 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:31:50.130 17:28:50 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:50.130 17:28:50 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:31:50.130 17:28:50 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:31:50.130 17:28:50 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:50.130 17:28:50 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:50.130 17:28:50 app_cmdline -- scripts/common.sh@368 -- # return 0 00:31:50.131 17:28:50 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:50.131 17:28:50 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:50.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.131 --rc genhtml_branch_coverage=1 00:31:50.131 --rc genhtml_function_coverage=1 00:31:50.131 --rc 
genhtml_legend=1 00:31:50.131 --rc geninfo_all_blocks=1 00:31:50.131 --rc geninfo_unexecuted_blocks=1 00:31:50.131 00:31:50.131 ' 00:31:50.131 17:28:50 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:50.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.131 --rc genhtml_branch_coverage=1 00:31:50.131 --rc genhtml_function_coverage=1 00:31:50.131 --rc genhtml_legend=1 00:31:50.131 --rc geninfo_all_blocks=1 00:31:50.131 --rc geninfo_unexecuted_blocks=1 00:31:50.131 00:31:50.131 ' 00:31:50.131 17:28:50 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:50.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.131 --rc genhtml_branch_coverage=1 00:31:50.131 --rc genhtml_function_coverage=1 00:31:50.131 --rc genhtml_legend=1 00:31:50.131 --rc geninfo_all_blocks=1 00:31:50.131 --rc geninfo_unexecuted_blocks=1 00:31:50.131 00:31:50.131 ' 00:31:50.131 17:28:50 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:50.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.131 --rc genhtml_branch_coverage=1 00:31:50.131 --rc genhtml_function_coverage=1 00:31:50.131 --rc genhtml_legend=1 00:31:50.131 --rc geninfo_all_blocks=1 00:31:50.131 --rc geninfo_unexecuted_blocks=1 00:31:50.131 00:31:50.131 ' 00:31:50.131 17:28:50 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:31:50.131 17:28:50 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60068 00:31:50.131 17:28:50 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60068 00:31:50.131 17:28:50 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 60068 ']' 00:31:50.131 17:28:50 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:50.131 17:28:50 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:50.131 17:28:50 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed 
spdk_get_version,rpc_get_methods 00:31:50.131 17:28:50 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:50.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:50.131 17:28:50 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:50.131 17:28:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:31:50.131 [2024-11-26 17:28:50.814987] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:31:50.131 [2024-11-26 17:28:50.815107] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60068 ] 00:31:50.391 [2024-11-26 17:28:50.989994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:50.650 [2024-11-26 17:28:51.109696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:51.590 17:28:52 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:51.590 17:28:52 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:31:51.590 17:28:52 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:31:51.590 { 00:31:51.590 "version": "SPDK v25.01-pre git sha1 c86e5b182", 00:31:51.590 "fields": { 00:31:51.590 "major": 25, 00:31:51.590 "minor": 1, 00:31:51.590 "patch": 0, 00:31:51.590 "suffix": "-pre", 00:31:51.590 "commit": "c86e5b182" 00:31:51.590 } 00:31:51.590 } 00:31:51.590 17:28:52 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:31:51.590 17:28:52 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:31:51.590 17:28:52 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:31:51.590 17:28:52 app_cmdline -- app/cmdline.sh@26 
-- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:31:51.590 17:28:52 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:31:51.590 17:28:52 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.590 17:28:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:31:51.591 17:28:52 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:31:51.591 17:28:52 app_cmdline -- app/cmdline.sh@26 -- # sort 00:31:51.591 17:28:52 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.850 17:28:52 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:31:51.850 17:28:52 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:31:51.850 17:28:52 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:31:51.850 17:28:52 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:31:51.850 17:28:52 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:31:51.850 17:28:52 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:51.850 17:28:52 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:51.850 17:28:52 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:51.850 17:28:52 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:51.850 17:28:52 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:51.850 17:28:52 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:51.850 17:28:52 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:51.850 17:28:52 app_cmdline -- 
common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:31:51.850 17:28:52 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:31:51.850 request: 00:31:51.850 { 00:31:51.850 "method": "env_dpdk_get_mem_stats", 00:31:51.850 "req_id": 1 00:31:51.850 } 00:31:51.850 Got JSON-RPC error response 00:31:51.850 response: 00:31:51.850 { 00:31:51.850 "code": -32601, 00:31:51.850 "message": "Method not found" 00:31:51.850 } 00:31:52.110 17:28:52 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:31:52.110 17:28:52 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:52.110 17:28:52 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:52.110 17:28:52 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:52.110 17:28:52 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60068 00:31:52.110 17:28:52 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 60068 ']' 00:31:52.110 17:28:52 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 60068 00:31:52.110 17:28:52 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:31:52.110 17:28:52 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:52.111 17:28:52 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60068 00:31:52.111 17:28:52 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:52.111 killing process with pid 60068 00:31:52.111 17:28:52 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:52.111 17:28:52 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60068' 00:31:52.111 17:28:52 app_cmdline -- common/autotest_common.sh@973 -- # kill 60068 00:31:52.111 17:28:52 app_cmdline -- common/autotest_common.sh@978 -- # wait 60068 00:31:54.659 00:31:54.659 real 0m4.681s 00:31:54.659 user 0m4.954s 00:31:54.659 sys 0m0.607s 00:31:54.659 
17:28:55 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:54.659 17:28:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:31:54.659 ************************************ 00:31:54.659 END TEST app_cmdline 00:31:54.659 ************************************ 00:31:54.659 17:28:55 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:31:54.659 17:28:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:54.659 17:28:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:54.659 17:28:55 -- common/autotest_common.sh@10 -- # set +x 00:31:54.659 ************************************ 00:31:54.659 START TEST version 00:31:54.659 ************************************ 00:31:54.659 17:28:55 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:31:54.659 * Looking for test storage... 00:31:54.918 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:31:54.918 17:28:55 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:54.918 17:28:55 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:54.918 17:28:55 version -- common/autotest_common.sh@1693 -- # lcov --version 00:31:54.918 17:28:55 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:54.918 17:28:55 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:54.918 17:28:55 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:54.918 17:28:55 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:54.918 17:28:55 version -- scripts/common.sh@336 -- # IFS=.-: 00:31:54.918 17:28:55 version -- scripts/common.sh@336 -- # read -ra ver1 00:31:54.918 17:28:55 version -- scripts/common.sh@337 -- # IFS=.-: 00:31:54.918 17:28:55 version -- scripts/common.sh@337 -- # read -ra ver2 00:31:54.918 17:28:55 version -- scripts/common.sh@338 -- # local 'op=<' 00:31:54.918 17:28:55 version -- scripts/common.sh@340 -- # ver1_l=2 
00:31:54.918 17:28:55 version -- scripts/common.sh@341 -- # ver2_l=1 00:31:54.918 17:28:55 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:54.918 17:28:55 version -- scripts/common.sh@344 -- # case "$op" in 00:31:54.918 17:28:55 version -- scripts/common.sh@345 -- # : 1 00:31:54.918 17:28:55 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:54.918 17:28:55 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:54.918 17:28:55 version -- scripts/common.sh@365 -- # decimal 1 00:31:54.918 17:28:55 version -- scripts/common.sh@353 -- # local d=1 00:31:54.918 17:28:55 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:54.918 17:28:55 version -- scripts/common.sh@355 -- # echo 1 00:31:54.918 17:28:55 version -- scripts/common.sh@365 -- # ver1[v]=1 00:31:54.918 17:28:55 version -- scripts/common.sh@366 -- # decimal 2 00:31:54.918 17:28:55 version -- scripts/common.sh@353 -- # local d=2 00:31:54.918 17:28:55 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:54.918 17:28:55 version -- scripts/common.sh@355 -- # echo 2 00:31:54.918 17:28:55 version -- scripts/common.sh@366 -- # ver2[v]=2 00:31:54.918 17:28:55 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:54.918 17:28:55 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:54.918 17:28:55 version -- scripts/common.sh@368 -- # return 0 00:31:54.918 17:28:55 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:54.918 17:28:55 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:54.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.918 --rc genhtml_branch_coverage=1 00:31:54.918 --rc genhtml_function_coverage=1 00:31:54.918 --rc genhtml_legend=1 00:31:54.918 --rc geninfo_all_blocks=1 00:31:54.918 --rc geninfo_unexecuted_blocks=1 00:31:54.918 00:31:54.918 ' 00:31:54.918 17:28:55 version -- 
common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:54.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.918 --rc genhtml_branch_coverage=1 00:31:54.918 --rc genhtml_function_coverage=1 00:31:54.918 --rc genhtml_legend=1 00:31:54.918 --rc geninfo_all_blocks=1 00:31:54.918 --rc geninfo_unexecuted_blocks=1 00:31:54.918 00:31:54.918 ' 00:31:54.918 17:28:55 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:54.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.918 --rc genhtml_branch_coverage=1 00:31:54.918 --rc genhtml_function_coverage=1 00:31:54.918 --rc genhtml_legend=1 00:31:54.918 --rc geninfo_all_blocks=1 00:31:54.918 --rc geninfo_unexecuted_blocks=1 00:31:54.918 00:31:54.918 ' 00:31:54.918 17:28:55 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:54.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.918 --rc genhtml_branch_coverage=1 00:31:54.918 --rc genhtml_function_coverage=1 00:31:54.918 --rc genhtml_legend=1 00:31:54.918 --rc geninfo_all_blocks=1 00:31:54.918 --rc geninfo_unexecuted_blocks=1 00:31:54.918 00:31:54.918 ' 00:31:54.918 17:28:55 version -- app/version.sh@17 -- # get_header_version major 00:31:54.918 17:28:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:31:54.918 17:28:55 version -- app/version.sh@14 -- # cut -f2 00:31:54.918 17:28:55 version -- app/version.sh@14 -- # tr -d '"' 00:31:54.918 17:28:55 version -- app/version.sh@17 -- # major=25 00:31:54.918 17:28:55 version -- app/version.sh@18 -- # get_header_version minor 00:31:54.918 17:28:55 version -- app/version.sh@14 -- # tr -d '"' 00:31:54.918 17:28:55 version -- app/version.sh@14 -- # cut -f2 00:31:54.918 17:28:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:31:54.918 17:28:55 version -- app/version.sh@18 -- 
# minor=1 00:31:54.918 17:28:55 version -- app/version.sh@19 -- # get_header_version patch 00:31:54.918 17:28:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:31:54.918 17:28:55 version -- app/version.sh@14 -- # tr -d '"' 00:31:54.918 17:28:55 version -- app/version.sh@14 -- # cut -f2 00:31:54.918 17:28:55 version -- app/version.sh@19 -- # patch=0 00:31:54.918 17:28:55 version -- app/version.sh@20 -- # get_header_version suffix 00:31:54.918 17:28:55 version -- app/version.sh@14 -- # tr -d '"' 00:31:54.918 17:28:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:31:54.918 17:28:55 version -- app/version.sh@14 -- # cut -f2 00:31:54.918 17:28:55 version -- app/version.sh@20 -- # suffix=-pre 00:31:54.918 17:28:55 version -- app/version.sh@22 -- # version=25.1 00:31:54.918 17:28:55 version -- app/version.sh@25 -- # (( patch != 0 )) 00:31:54.918 17:28:55 version -- app/version.sh@28 -- # version=25.1rc0 00:31:54.918 17:28:55 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:31:54.918 17:28:55 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:31:54.918 17:28:55 version -- app/version.sh@30 -- # py_version=25.1rc0 00:31:54.918 17:28:55 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:31:54.918 00:31:54.918 real 0m0.309s 00:31:54.918 user 0m0.188s 00:31:54.918 sys 0m0.174s 00:31:54.918 17:28:55 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:54.918 17:28:55 version -- common/autotest_common.sh@10 -- # set +x 00:31:54.918 ************************************ 00:31:54.919 END TEST version 00:31:54.919 
************************************ 00:31:54.919 17:28:55 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:31:54.919 17:28:55 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:31:54.919 17:28:55 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:31:54.919 17:28:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:54.919 17:28:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:54.919 17:28:55 -- common/autotest_common.sh@10 -- # set +x 00:31:55.178 ************************************ 00:31:55.178 START TEST bdev_raid 00:31:55.178 ************************************ 00:31:55.178 17:28:55 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:31:55.178 * Looking for test storage... 00:31:55.178 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:31:55.178 17:28:55 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:55.178 17:28:55 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:31:55.178 17:28:55 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:55.178 17:28:55 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:55.178 17:28:55 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:55.178 17:28:55 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:55.178 17:28:55 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:55.178 17:28:55 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:31:55.178 17:28:55 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:31:55.178 17:28:55 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:31:55.178 17:28:55 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:31:55.178 17:28:55 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:31:55.178 17:28:55 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:31:55.178 17:28:55 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:31:55.178 
17:28:55 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:55.178 17:28:55 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:31:55.178 17:28:55 bdev_raid -- scripts/common.sh@345 -- # : 1 00:31:55.178 17:28:55 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:55.178 17:28:55 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:55.178 17:28:55 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:31:55.178 17:28:55 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:31:55.178 17:28:55 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:55.178 17:28:55 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:31:55.178 17:28:55 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:31:55.178 17:28:55 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:31:55.178 17:28:55 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:31:55.178 17:28:55 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:55.178 17:28:55 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:31:55.178 17:28:55 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:31:55.178 17:28:55 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:55.178 17:28:55 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:55.178 17:28:55 bdev_raid -- scripts/common.sh@368 -- # return 0 00:31:55.178 17:28:55 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:55.178 17:28:55 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:55.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.178 --rc genhtml_branch_coverage=1 00:31:55.178 --rc genhtml_function_coverage=1 00:31:55.178 --rc genhtml_legend=1 00:31:55.178 --rc geninfo_all_blocks=1 00:31:55.178 --rc geninfo_unexecuted_blocks=1 00:31:55.178 00:31:55.178 ' 00:31:55.178 17:28:55 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 
00:31:55.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.178 --rc genhtml_branch_coverage=1 00:31:55.178 --rc genhtml_function_coverage=1 00:31:55.178 --rc genhtml_legend=1 00:31:55.178 --rc geninfo_all_blocks=1 00:31:55.178 --rc geninfo_unexecuted_blocks=1 00:31:55.178 00:31:55.178 ' 00:31:55.178 17:28:55 bdev_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:55.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.178 --rc genhtml_branch_coverage=1 00:31:55.178 --rc genhtml_function_coverage=1 00:31:55.178 --rc genhtml_legend=1 00:31:55.178 --rc geninfo_all_blocks=1 00:31:55.178 --rc geninfo_unexecuted_blocks=1 00:31:55.178 00:31:55.178 ' 00:31:55.178 17:28:55 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:55.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.178 --rc genhtml_branch_coverage=1 00:31:55.178 --rc genhtml_function_coverage=1 00:31:55.178 --rc genhtml_legend=1 00:31:55.178 --rc geninfo_all_blocks=1 00:31:55.178 --rc geninfo_unexecuted_blocks=1 00:31:55.178 00:31:55.178 ' 00:31:55.178 17:28:55 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:31:55.178 17:28:55 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:31:55.178 17:28:55 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:31:55.178 17:28:55 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:31:55.178 17:28:55 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:31:55.178 17:28:55 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:31:55.178 17:28:55 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:31:55.178 17:28:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:55.178 17:28:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:55.178 17:28:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:31:55.178 ************************************ 00:31:55.178 START TEST raid1_resize_data_offset_test 00:31:55.178 ************************************ 00:31:55.178 17:28:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:31:55.178 17:28:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=60261 00:31:55.178 17:28:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:31:55.178 Process raid pid: 60261 00:31:55.178 17:28:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60261' 00:31:55.178 17:28:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60261 00:31:55.178 17:28:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 60261 ']' 00:31:55.178 17:28:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:55.178 17:28:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:55.178 17:28:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:55.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:55.178 17:28:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:55.178 17:28:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:31:55.438 [2024-11-26 17:28:55.945464] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:31:55.438 [2024-11-26 17:28:55.945712] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:55.438 [2024-11-26 17:28:56.123955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:55.697 [2024-11-26 17:28:56.254577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:55.956 [2024-11-26 17:28:56.500111] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:55.956 [2024-11-26 17:28:56.500228] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:56.214 17:28:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:56.214 17:28:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:31:56.214 17:28:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:31:56.214 17:28:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.214 17:28:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:31:56.214 malloc0 00:31:56.214 17:28:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.214 17:28:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:31:56.214 17:28:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.214 17:28:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:31:56.471 malloc1 00:31:56.471 17:28:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.471 17:28:56 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:31:56.471 17:28:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.471 17:28:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:31:56.471 null0 00:31:56.472 17:28:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.472 17:28:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:31:56.472 17:28:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.472 17:28:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:31:56.472 [2024-11-26 17:28:57.008798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:31:56.472 [2024-11-26 17:28:57.010867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:31:56.472 [2024-11-26 17:28:57.010929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:31:56.472 [2024-11-26 17:28:57.011111] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:31:56.472 [2024-11-26 17:28:57.011129] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:31:56.472 [2024-11-26 17:28:57.011425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:31:56.472 [2024-11-26 17:28:57.011636] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:31:56.472 [2024-11-26 17:28:57.011706] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:31:56.472 [2024-11-26 17:28:57.011891] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:31:56.472 17:28:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.472 17:28:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:56.472 17:28:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.472 17:28:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:31:56.472 17:28:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:31:56.472 17:28:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.472 17:28:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:31:56.472 17:28:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:31:56.472 17:28:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.472 17:28:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:31:56.472 [2024-11-26 17:28:57.044710] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:31:56.472 17:28:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.472 17:28:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:31:56.472 17:28:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.472 17:28:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:31:57.117 malloc2 00:31:57.117 17:28:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.117 17:28:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:31:57.117 17:28:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.117 17:28:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:31:57.117 [2024-11-26 17:28:57.689707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:31:57.117 [2024-11-26 17:28:57.712452] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:31:57.117 17:28:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.117 [2024-11-26 17:28:57.714561] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:31:57.117 17:28:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:57.117 17:28:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.117 17:28:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:31:57.117 17:28:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:31:57.117 17:28:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.117 17:28:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:31:57.117 17:28:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60261 00:31:57.117 17:28:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 60261 ']' 00:31:57.117 17:28:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 60261 00:31:57.117 17:28:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:31:57.117 17:28:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:31:57.117 17:28:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60261 00:31:57.117 killing process with pid 60261 00:31:57.117 17:28:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:57.117 17:28:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:57.117 17:28:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60261' 00:31:57.117 17:28:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 60261 00:31:57.117 17:28:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 60261 00:31:57.117 [2024-11-26 17:28:57.768960] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:57.117 [2024-11-26 17:28:57.770213] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:31:57.117 [2024-11-26 17:28:57.770280] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:57.117 [2024-11-26 17:28:57.770300] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:31:57.377 [2024-11-26 17:28:57.813548] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:57.377 [2024-11-26 17:28:57.813945] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:57.377 [2024-11-26 17:28:57.813967] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:31:59.290 [2024-11-26 17:28:59.866139] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:00.664 17:29:01 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:32:00.664 00:32:00.664 real 0m5.273s 00:32:00.664 user 0m5.156s 00:32:00.664 sys 0m0.512s 00:32:00.664 17:29:01 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:00.664 17:29:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:32:00.664 ************************************ 00:32:00.664 END TEST raid1_resize_data_offset_test 00:32:00.664 ************************************ 00:32:00.664 17:29:01 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:32:00.664 17:29:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:00.664 17:29:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:00.664 17:29:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:00.664 ************************************ 00:32:00.664 START TEST raid0_resize_superblock_test 00:32:00.664 ************************************ 00:32:00.664 17:29:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:32:00.664 17:29:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:32:00.664 17:29:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60350 00:32:00.664 Process raid pid: 60350 00:32:00.664 17:29:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:32:00.664 17:29:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60350' 00:32:00.664 17:29:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60350 00:32:00.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:00.664 17:29:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60350 ']' 00:32:00.664 17:29:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:00.664 17:29:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:00.664 17:29:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:00.664 17:29:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:00.664 17:29:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:00.664 [2024-11-26 17:29:01.281224] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:32:00.664 [2024-11-26 17:29:01.281447] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:00.922 [2024-11-26 17:29:01.457771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:00.922 [2024-11-26 17:29:01.589507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:01.181 [2024-11-26 17:29:01.816001] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:01.181 [2024-11-26 17:29:01.816114] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:01.789 17:29:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:01.789 17:29:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:32:01.789 17:29:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 
00:32:01.789 17:29:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.789 17:29:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:02.359 malloc0 00:32:02.359 17:29:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.359 17:29:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:32:02.359 17:29:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.359 17:29:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:02.359 [2024-11-26 17:29:02.783951] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:32:02.359 [2024-11-26 17:29:02.784087] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:02.359 [2024-11-26 17:29:02.784152] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:32:02.359 [2024-11-26 17:29:02.784207] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:02.359 [2024-11-26 17:29:02.786700] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:02.359 [2024-11-26 17:29:02.786790] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:32:02.359 pt0 00:32:02.359 17:29:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.359 17:29:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:32:02.359 17:29:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.359 17:29:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:02.359 4ec69bb5-9eee-4cf6-a476-0f72abfee0c2 00:32:02.359 17:29:02 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.359 17:29:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:32:02.359 17:29:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.359 17:29:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:02.359 3160393f-c452-4ee4-a745-e900bb996fca 00:32:02.359 17:29:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.359 17:29:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:32:02.359 17:29:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.359 17:29:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:02.359 ef8a09fb-b7de-4771-9419-57ccc0ae8d21 00:32:02.359 17:29:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.359 17:29:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:32:02.359 17:29:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:32:02.359 17:29:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.359 17:29:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:02.359 [2024-11-26 17:29:02.903315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 3160393f-c452-4ee4-a745-e900bb996fca is claimed 00:32:02.359 [2024-11-26 17:29:02.903432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ef8a09fb-b7de-4771-9419-57ccc0ae8d21 is claimed 00:32:02.359 [2024-11-26 17:29:02.903618] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:32:02.359 [2024-11-26 17:29:02.903640] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:32:02.359 [2024-11-26 17:29:02.903994] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:32:02.359 [2024-11-26 17:29:02.904316] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:32:02.359 [2024-11-26 17:29:02.904338] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:32:02.359 [2024-11-26 17:29:02.904588] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:02.359 17:29:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.359 17:29:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:32:02.359 17:29:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.359 17:29:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:32:02.359 17:29:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:02.359 17:29:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.359 17:29:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:32:02.359 17:29:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:32:02.359 17:29:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:32:02.359 17:29:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.359 17:29:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:02.359 17:29:02 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.359 17:29:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:32:02.359 17:29:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:32:02.359 17:29:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:32:02.359 17:29:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.359 17:29:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:02.359 17:29:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:32:02.359 17:29:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:32:02.359 [2024-11-26 17:29:03.015431] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:02.359 17:29:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.619 17:29:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:32:02.619 17:29:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:32:02.619 17:29:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:32:02.619 17:29:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:32:02.619 17:29:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.619 17:29:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:02.619 [2024-11-26 17:29:03.063297] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:32:02.619 [2024-11-26 17:29:03.063385] bdev_raid.c:2330:raid_bdev_resize_base_bdev: 
*NOTICE*: base_bdev '3160393f-c452-4ee4-a745-e900bb996fca' was resized: old size 131072, new size 204800 00:32:02.619 17:29:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.619 17:29:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:32:02.619 17:29:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.619 17:29:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:02.619 [2024-11-26 17:29:03.071180] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:32:02.619 [2024-11-26 17:29:03.071261] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'ef8a09fb-b7de-4771-9419-57ccc0ae8d21' was resized: old size 131072, new size 204800 00:32:02.619 [2024-11-26 17:29:03.071330] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:32:02.619 17:29:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.619 17:29:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:32:02.619 17:29:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:32:02.619 17:29:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.619 17:29:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:02.619 17:29:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.619 17:29:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:32:02.619 17:29:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:32:02.619 17:29:03 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.619 17:29:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:32:02.619 17:29:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:02.619 17:29:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.619 17:29:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:32:02.619 17:29:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:32:02.619 17:29:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:32:02.619 17:29:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:32:02.619 17:29:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:32:02.619 17:29:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.619 17:29:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:02.619 [2024-11-26 17:29:03.147183] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:02.619 17:29:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.619 17:29:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:32:02.619 17:29:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:32:02.619 17:29:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:32:02.619 17:29:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:32:02.619 17:29:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:32:02.619 17:29:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:02.619 [2024-11-26 17:29:03.198869] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:32:02.619 [2024-11-26 17:29:03.199015] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:32:02.619 [2024-11-26 17:29:03.199040] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:02.619 [2024-11-26 17:29:03.199058] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:32:02.619 [2024-11-26 17:29:03.199207] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:02.619 [2024-11-26 17:29:03.199251] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:02.619 [2024-11-26 17:29:03.199266] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:32:02.619 17:29:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.619 17:29:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:32:02.619 17:29:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.619 17:29:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:02.619 [2024-11-26 17:29:03.206739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:32:02.619 [2024-11-26 17:29:03.206819] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:02.619 [2024-11-26 17:29:03.206841] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:32:02.619 [2024-11-26 17:29:03.206854] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:02.619 
[2024-11-26 17:29:03.209411] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:02.619 [2024-11-26 17:29:03.209459] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:32:02.619 [2024-11-26 17:29:03.211584] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 3160393f-c452-4ee4-a745-e900bb996fca 00:32:02.619 [2024-11-26 17:29:03.211665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 3160393f-c452-4ee4-a745-e900bb996fca is claimed 00:32:02.619 [2024-11-26 17:29:03.211813] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev ef8a09fb-b7de-4771-9419-57ccc0ae8d21 00:32:02.619 [2024-11-26 17:29:03.211834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ef8a09fb-b7de-4771-9419-57ccc0ae8d21 is claimed 00:32:02.619 [2024-11-26 17:29:03.212076] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev ef8a09fb-b7de-4771-9419-57ccc0ae8d21 (2) smaller than existing raid bdev Raid (3) 00:32:02.619 [2024-11-26 17:29:03.212131] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 3160393f-c452-4ee4-a745-e900bb996fca: File exists 00:32:02.619 [2024-11-26 17:29:03.212196] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:32:02.619 pt0 00:32:02.620 [2024-11-26 17:29:03.212218] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:32:02.620 17:29:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.620 [2024-11-26 17:29:03.212642] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:32:02.620 17:29:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:32:02.620 17:29:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.620 17:29:03 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:02.620 [2024-11-26 17:29:03.212870] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:32:02.620 [2024-11-26 17:29:03.212884] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:32:02.620 [2024-11-26 17:29:03.213119] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:02.620 17:29:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.620 17:29:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:32:02.620 17:29:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:32:02.620 17:29:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.620 17:29:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:02.620 17:29:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:32:02.620 17:29:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:32:02.620 [2024-11-26 17:29:03.227953] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:02.620 17:29:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.620 17:29:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:32:02.620 17:29:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:32:02.620 17:29:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:32:02.620 17:29:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60350 00:32:02.620 17:29:03 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60350 ']' 00:32:02.620 17:29:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60350 00:32:02.620 17:29:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:32:02.620 17:29:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:02.620 17:29:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60350 00:32:02.620 killing process with pid 60350 00:32:02.620 17:29:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:02.620 17:29:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:02.620 17:29:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60350' 00:32:02.620 17:29:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60350 00:32:02.620 17:29:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60350 00:32:02.620 [2024-11-26 17:29:03.291572] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:02.620 [2024-11-26 17:29:03.291675] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:02.620 [2024-11-26 17:29:03.291730] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:02.620 [2024-11-26 17:29:03.291746] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:32:04.527 [2024-11-26 17:29:04.948781] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:05.908 17:29:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:32:05.908 00:32:05.908 real 0m5.101s 00:32:05.908 user 
0m5.274s 00:32:05.908 sys 0m0.582s 00:32:05.908 ************************************ 00:32:05.908 END TEST raid0_resize_superblock_test 00:32:05.908 ************************************ 00:32:05.908 17:29:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:05.909 17:29:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:05.909 17:29:06 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:32:05.909 17:29:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:05.909 17:29:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:05.909 17:29:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:05.909 ************************************ 00:32:05.909 START TEST raid1_resize_superblock_test 00:32:05.909 ************************************ 00:32:05.909 17:29:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:32:05.909 17:29:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:32:05.909 17:29:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60454 00:32:05.909 17:29:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60454' 00:32:05.909 Process raid pid: 60454 00:32:05.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:05.909 17:29:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60454 00:32:05.909 17:29:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60454 ']' 00:32:05.909 17:29:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:05.909 17:29:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:05.909 17:29:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:05.909 17:29:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:05.909 17:29:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:32:05.909 17:29:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:05.909 [2024-11-26 17:29:06.435948] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:32:05.909 [2024-11-26 17:29:06.436102] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:06.166 [2024-11-26 17:29:06.601646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:06.166 [2024-11-26 17:29:06.740661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:06.422 [2024-11-26 17:29:06.990984] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:06.422 [2024-11-26 17:29:06.991032] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:06.679 17:29:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:06.679 17:29:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:32:06.679 17:29:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:32:06.679 17:29:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.679 17:29:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:07.609 malloc0 00:32:07.609 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.609 17:29:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:32:07.609 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.609 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:07.609 [2024-11-26 17:29:08.065177] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:32:07.609 [2024-11-26 17:29:08.065250] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:07.609 [2024-11-26 17:29:08.065281] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:32:07.609 [2024-11-26 17:29:08.065315] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:07.609 [2024-11-26 17:29:08.067913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:07.609 [2024-11-26 17:29:08.068036] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:32:07.609 pt0 00:32:07.609 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.609 17:29:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:32:07.609 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.609 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:07.609 c981dc43-2e30-4cad-849e-7887ebcfcbda 00:32:07.609 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.609 17:29:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:32:07.609 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.609 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:07.609 4460b21c-a996-4af2-a9a7-ea32d35d15b1 00:32:07.609 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.609 17:29:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:32:07.609 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.609 17:29:08 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:07.609 c76542a5-15bd-4cb4-b2e0-bb0313214526 00:32:07.609 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.609 17:29:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:32:07.609 17:29:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:32:07.609 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.609 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:07.609 [2024-11-26 17:29:08.182986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 4460b21c-a996-4af2-a9a7-ea32d35d15b1 is claimed 00:32:07.609 [2024-11-26 17:29:08.183116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev c76542a5-15bd-4cb4-b2e0-bb0313214526 is claimed 00:32:07.609 [2024-11-26 17:29:08.183287] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:32:07.609 [2024-11-26 17:29:08.183305] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:32:07.609 [2024-11-26 17:29:08.183674] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:32:07.609 [2024-11-26 17:29:08.184000] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:32:07.609 [2024-11-26 17:29:08.184024] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:32:07.609 [2024-11-26 17:29:08.184257] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:07.609 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.609 17:29:08 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:32:07.609 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.609 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:07.609 17:29:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:32:07.609 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.609 17:29:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:32:07.609 17:29:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:32:07.609 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.609 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:07.610 17:29:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:32:07.610 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.610 17:29:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:32:07.610 17:29:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:32:07.610 17:29:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:32:07.610 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.610 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:07.610 17:29:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:32:07.610 17:29:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:32:07.610 [2024-11-26 
17:29:08.287108] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:07.610 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:07.868 [2024-11-26 17:29:08.339008] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:32:07.868 [2024-11-26 17:29:08.339055] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '4460b21c-a996-4af2-a9a7-ea32d35d15b1' was resized: old size 131072, new size 204800 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:07.868 [2024-11-26 17:29:08.350951] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:32:07.868 [2024-11-26 17:29:08.350996] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'c76542a5-15bd-4cb4-b2e0-bb0313214526' was resized: old size 131072, new size 204800 00:32:07.868 
[2024-11-26 17:29:08.351043] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:32:07.868 17:29:08 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:07.868 [2024-11-26 17:29:08.466804] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:07.868 [2024-11-26 17:29:08.494966] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:32:07.868 [2024-11-26 17:29:08.495323] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:32:07.868 [2024-11-26 17:29:08.495457] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:32:07.868 [2024-11-26 17:29:08.496112] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:07.868 [2024-11-26 17:29:08.497005] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:07.868 [2024-11-26 17:29:08.497627] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:32:07.868 [2024-11-26 17:29:08.497709] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:07.868 [2024-11-26 17:29:08.506488] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:32:07.868 [2024-11-26 17:29:08.506620] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:07.868 [2024-11-26 17:29:08.506663] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:32:07.868 [2024-11-26 17:29:08.506695] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:07.868 [2024-11-26 17:29:08.511570] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:07.868 [2024-11-26 17:29:08.511730] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:32:07.868 pt0 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:07.868 [2024-11-26 17:29:08.514793] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 4460b21c-a996-4af2-a9a7-ea32d35d15b1 00:32:07.868 [2024-11-26 
17:29:08.515006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 4460b21c-a996-4af2-a9a7-ea32d35d15b1 is claimed 00:32:07.868 [2024-11-26 17:29:08.515257] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev c76542a5-15bd-4cb4-b2e0-bb0313214526 00:32:07.868 [2024-11-26 17:29:08.515297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev c76542a5-15bd-4cb4-b2e0-bb0313214526 is claimed 00:32:07.868 [2024-11-26 17:29:08.515499] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev c76542a5-15bd-4cb4-b2e0-bb0313214526 (2) smaller than existing raid bdev Raid (3) 00:32:07.868 [2024-11-26 17:29:08.515589] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 4460b21c-a996-4af2-a9a7-ea32d35d15b1: File exists 00:32:07.868 [2024-11-26 17:29:08.515650] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:32:07.868 [2024-11-26 17:29:08.515670] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:32:07.868 [2024-11-26 17:29:08.516118] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:32:07.868 [2024-11-26 17:29:08.516413] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:32:07.868 [2024-11-26 17:29:08.516431] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:32:07.868 [2024-11-26 17:29:08.516964] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:32:07.868 [2024-11-26 17:29:08.528834] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:07.868 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:08.126 17:29:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:32:08.126 17:29:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:32:08.126 17:29:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:32:08.126 17:29:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60454 00:32:08.126 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60454 ']' 00:32:08.126 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60454 00:32:08.126 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:32:08.126 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:08.126 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60454 00:32:08.126 killing process with pid 60454 00:32:08.126 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:08.126 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:08.126 17:29:08 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 60454' 00:32:08.126 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60454 00:32:08.126 [2024-11-26 17:29:08.595547] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:08.126 17:29:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60454 00:32:08.126 [2024-11-26 17:29:08.595653] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:08.126 [2024-11-26 17:29:08.595713] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:08.126 [2024-11-26 17:29:08.595724] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:32:10.065 [2024-11-26 17:29:10.209778] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:11.023 ************************************ 00:32:11.023 END TEST raid1_resize_superblock_test 00:32:11.023 ************************************ 00:32:11.023 17:29:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:32:11.023 00:32:11.023 real 0m5.164s 00:32:11.023 user 0m5.261s 00:32:11.023 sys 0m0.700s 00:32:11.023 17:29:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:11.023 17:29:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:11.023 17:29:11 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:32:11.023 17:29:11 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:32:11.023 17:29:11 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:32:11.023 17:29:11 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:32:11.023 17:29:11 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:32:11.023 17:29:11 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:32:11.023 
17:29:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:11.023 17:29:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:11.023 17:29:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:11.023 ************************************ 00:32:11.023 START TEST raid_function_test_raid0 00:32:11.023 ************************************ 00:32:11.023 17:29:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:32:11.023 17:29:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:32:11.023 17:29:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:32:11.023 17:29:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:32:11.023 Process raid pid: 60562 00:32:11.023 17:29:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60562 00:32:11.023 17:29:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:32:11.023 17:29:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60562' 00:32:11.023 17:29:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60562 00:32:11.023 17:29:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60562 ']' 00:32:11.023 17:29:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:11.023 17:29:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:11.023 17:29:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:11.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:11.023 17:29:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:11.023 17:29:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:32:11.023 [2024-11-26 17:29:11.690107] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:32:11.023 [2024-11-26 17:29:11.690237] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:11.283 [2024-11-26 17:29:11.866272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:11.543 [2024-11-26 17:29:11.989660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:11.543 [2024-11-26 17:29:12.217301] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:11.543 [2024-11-26 17:29:12.217352] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:12.113 17:29:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:12.113 17:29:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:32:12.113 17:29:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:32:12.113 17:29:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.113 17:29:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:32:12.113 Base_1 00:32:12.113 17:29:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.113 17:29:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:32:12.113 17:29:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.113 
17:29:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:32:12.113 Base_2 00:32:12.113 17:29:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.113 17:29:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:32:12.113 17:29:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.113 17:29:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:32:12.113 [2024-11-26 17:29:12.688107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:32:12.113 [2024-11-26 17:29:12.689989] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:32:12.113 [2024-11-26 17:29:12.690057] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:32:12.113 [2024-11-26 17:29:12.690069] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:32:12.113 [2024-11-26 17:29:12.690334] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:32:12.113 [2024-11-26 17:29:12.690484] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:32:12.113 [2024-11-26 17:29:12.690493] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:32:12.113 [2024-11-26 17:29:12.690696] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:12.113 17:29:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.113 17:29:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:32:12.113 17:29:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:32:12.113 17:29:12 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.113 17:29:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:32:12.113 17:29:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.113 17:29:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:32:12.113 17:29:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:32:12.113 17:29:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:32:12.113 17:29:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:32:12.113 17:29:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:32:12.113 17:29:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:12.113 17:29:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:32:12.113 17:29:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:12.113 17:29:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:32:12.113 17:29:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:12.113 17:29:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:12.113 17:29:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:32:12.374 [2024-11-26 17:29:12.931800] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:32:12.374 /dev/nbd0 00:32:12.374 17:29:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:12.374 17:29:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:32:12.374 17:29:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:32:12.374 17:29:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:32:12.374 17:29:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:32:12.374 17:29:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:32:12.374 17:29:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:32:12.374 17:29:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:32:12.374 17:29:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:32:12.374 17:29:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:32:12.374 17:29:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:12.374 1+0 records in 00:32:12.374 1+0 records out 00:32:12.374 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000521009 s, 7.9 MB/s 00:32:12.374 17:29:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:12.374 17:29:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:32:12.374 17:29:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:12.374 17:29:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:32:12.374 17:29:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:32:12.374 17:29:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:12.374 17:29:12 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:12.374 17:29:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:32:12.374 17:29:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:32:12.374 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:32:12.634 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:32:12.634 { 00:32:12.634 "nbd_device": "/dev/nbd0", 00:32:12.634 "bdev_name": "raid" 00:32:12.634 } 00:32:12.634 ]' 00:32:12.634 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:32:12.634 { 00:32:12.634 "nbd_device": "/dev/nbd0", 00:32:12.634 "bdev_name": "raid" 00:32:12.634 } 00:32:12.634 ]' 00:32:12.634 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:32:12.634 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:32:12.634 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:32:12.634 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:32:12.634 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:32:12.634 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:32:12.634 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:32:12.634 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:32:12.634 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:32:12.634 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:32:12.634 17:29:13 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:32:12.634 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:32:12.634 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:32:12.634 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:32:12.634 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:32:12.634 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:32:12.634 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:32:12.634 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:32:12.634 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:32:12.634 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:32:12.634 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:32:12.634 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:32:12.634 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:32:12.634 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:32:12.634 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:32:12.634 4096+0 records in 00:32:12.634 4096+0 records out 00:32:12.634 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0326901 s, 64.2 MB/s 00:32:12.634 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:32:12.894 4096+0 records in 00:32:12.894 4096+0 records out 00:32:12.894 2097152 bytes (2.1 MB, 2.0 MiB) copied, 
0.216526 s, 9.7 MB/s 00:32:12.894 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:32:12.894 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:32:12.894 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:32:12.894 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:32:12.894 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:32:12.894 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:32:12.894 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:32:12.894 128+0 records in 00:32:12.894 128+0 records out 00:32:12.894 65536 bytes (66 kB, 64 KiB) copied, 0.00113709 s, 57.6 MB/s 00:32:12.894 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:32:12.894 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:32:12.894 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:32:12.894 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:32:12.894 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:32:12.894 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:32:12.894 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:32:12.894 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:32:13.154 2035+0 records in 00:32:13.154 2035+0 records out 00:32:13.154 1041920 
bytes (1.0 MB, 1018 KiB) copied, 0.0154832 s, 67.3 MB/s 00:32:13.154 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:32:13.154 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:32:13.154 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:32:13.154 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:32:13.154 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:32:13.154 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:32:13.154 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:32:13.154 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:32:13.154 456+0 records in 00:32:13.154 456+0 records out 00:32:13.154 233472 bytes (233 kB, 228 KiB) copied, 0.00382721 s, 61.0 MB/s 00:32:13.154 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:32:13.154 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:32:13.154 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:32:13.154 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:32:13.154 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:32:13.154 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:32:13.154 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:32:13.155 17:29:13 bdev_raid.raid_function_test_raid0 
-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:32:13.155 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:32:13.155 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:13.155 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:32:13.155 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:13.155 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:32:13.414 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:13.414 [2024-11-26 17:29:13.878570] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:13.414 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:13.414 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:13.414 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:13.414 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:13.414 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:13.414 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:32:13.414 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:32:13.414 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:32:13.414 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:32:13.414 17:29:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_get_disks 00:32:13.673 17:29:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:32:13.673 17:29:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:32:13.673 17:29:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:32:13.673 17:29:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:32:13.673 17:29:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:32:13.673 17:29:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:32:13.673 17:29:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:32:13.673 17:29:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:32:13.673 17:29:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:32:13.673 17:29:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:32:13.673 17:29:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:32:13.673 17:29:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60562 00:32:13.673 17:29:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60562 ']' 00:32:13.673 17:29:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60562 00:32:13.673 17:29:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:32:13.673 17:29:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:13.673 17:29:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60562 00:32:13.673 killing process with pid 60562 00:32:13.673 17:29:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:13.673 17:29:14 bdev_raid.raid_function_test_raid0 
-- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:13.673 17:29:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60562' 00:32:13.673 17:29:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60562 00:32:13.673 17:29:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60562 00:32:13.673 [2024-11-26 17:29:14.223742] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:13.673 [2024-11-26 17:29:14.223878] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:13.673 [2024-11-26 17:29:14.223971] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:13.673 [2024-11-26 17:29:14.223993] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:32:13.930 [2024-11-26 17:29:14.452062] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:15.310 17:29:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:32:15.310 00:32:15.310 real 0m4.075s 00:32:15.310 user 0m4.788s 00:32:15.310 sys 0m0.925s 00:32:15.310 17:29:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:15.310 ************************************ 00:32:15.310 END TEST raid_function_test_raid0 00:32:15.310 ************************************ 00:32:15.310 17:29:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:32:15.310 17:29:15 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:32:15.310 17:29:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:15.310 17:29:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:15.310 17:29:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:15.310 
************************************ 00:32:15.310 START TEST raid_function_test_concat 00:32:15.310 ************************************ 00:32:15.310 17:29:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:32:15.310 17:29:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:32:15.310 17:29:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:32:15.310 17:29:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:32:15.310 Process raid pid: 60691 00:32:15.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:15.311 17:29:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60691 00:32:15.311 17:29:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:32:15.311 17:29:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60691' 00:32:15.311 17:29:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60691 00:32:15.311 17:29:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60691 ']' 00:32:15.311 17:29:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:15.311 17:29:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:15.311 17:29:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:15.311 17:29:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:15.311 17:29:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:32:15.311 [2024-11-26 17:29:15.839167] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:32:15.311 [2024-11-26 17:29:15.839306] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:15.569 [2024-11-26 17:29:16.017842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:15.569 [2024-11-26 17:29:16.144834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:15.828 [2024-11-26 17:29:16.363403] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:15.828 [2024-11-26 17:29:16.363461] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:16.087 17:29:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:16.087 17:29:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:32:16.087 17:29:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:32:16.087 17:29:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.087 17:29:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:32:16.087 Base_1 00:32:16.087 17:29:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.087 17:29:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:32:16.087 17:29:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 
00:32:16.087 17:29:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:32:16.349 Base_2 00:32:16.349 17:29:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.349 17:29:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:32:16.349 17:29:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.349 17:29:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:32:16.349 [2024-11-26 17:29:16.806484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:32:16.349 [2024-11-26 17:29:16.808346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:32:16.349 [2024-11-26 17:29:16.808468] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:32:16.349 [2024-11-26 17:29:16.808485] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:32:16.349 [2024-11-26 17:29:16.808771] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:32:16.349 [2024-11-26 17:29:16.808922] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:32:16.349 [2024-11-26 17:29:16.808930] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:32:16.349 [2024-11-26 17:29:16.809087] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:16.349 17:29:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.349 17:29:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:32:16.349 17:29:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.349 17:29:16 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:32:16.349 17:29:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:32:16.349 17:29:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.349 17:29:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:32:16.349 17:29:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:32:16.349 17:29:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:32:16.349 17:29:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:32:16.349 17:29:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:32:16.349 17:29:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:16.349 17:29:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:32:16.349 17:29:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:16.349 17:29:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:32:16.349 17:29:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:16.349 17:29:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:16.349 17:29:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:32:16.608 [2024-11-26 17:29:17.046118] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:32:16.608 /dev/nbd0 00:32:16.608 17:29:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:16.608 17:29:17 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:16.608 17:29:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:32:16.608 17:29:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:32:16.608 17:29:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:32:16.608 17:29:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:32:16.608 17:29:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:32:16.608 17:29:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:32:16.608 17:29:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:32:16.608 17:29:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:32:16.608 17:29:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:16.608 1+0 records in 00:32:16.608 1+0 records out 00:32:16.608 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000472194 s, 8.7 MB/s 00:32:16.608 17:29:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:16.608 17:29:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:32:16.608 17:29:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:16.608 17:29:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:32:16.608 17:29:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:32:16.608 17:29:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:16.608 
17:29:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:16.608 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:32:16.608 17:29:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:32:16.608 17:29:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:32:16.867 17:29:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:32:16.867 { 00:32:16.867 "nbd_device": "/dev/nbd0", 00:32:16.867 "bdev_name": "raid" 00:32:16.867 } 00:32:16.867 ]' 00:32:16.867 17:29:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:32:16.867 17:29:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:32:16.867 { 00:32:16.867 "nbd_device": "/dev/nbd0", 00:32:16.867 "bdev_name": "raid" 00:32:16.867 } 00:32:16.867 ]' 00:32:16.867 17:29:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:32:16.867 17:29:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:32:16.867 17:29:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:32:16.867 17:29:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:32:16.867 17:29:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:32:16.867 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:32:16.867 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:32:16.867 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:32:16.867 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:32:16.867 
17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:32:16.867 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:32:16.867 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:32:16.867 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:32:16.867 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:32:16.867 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:32:16.867 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:32:16.867 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:32:16.867 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:32:16.867 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:32:16.867 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:32:16.867 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:32:16.867 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:32:16.867 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:32:16.867 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:32:16.867 4096+0 records in 00:32:16.867 4096+0 records out 00:32:16.867 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0213197 s, 98.4 MB/s 00:32:16.867 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:32:17.127 4096+0 records in 00:32:17.127 4096+0 
records out 00:32:17.127 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.203762 s, 10.3 MB/s 00:32:17.127 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:32:17.127 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:32:17.127 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:32:17.127 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:32:17.127 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:32:17.127 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:32:17.127 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:32:17.127 128+0 records in 00:32:17.127 128+0 records out 00:32:17.127 65536 bytes (66 kB, 64 KiB) copied, 0.00111544 s, 58.8 MB/s 00:32:17.127 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:32:17.127 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:32:17.127 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:32:17.127 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:32:17.127 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:32:17.127 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:32:17.127 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:32:17.127 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 
00:32:17.127 2035+0 records in
00:32:17.127 2035+0 records out
00:32:17.127 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0133218 s, 78.2 MB/s
00:32:17.127 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0
00:32:17.127 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:32:17.127 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:32:17.127 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:32:17.127 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:32:17.127 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352
00:32:17.127 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472
00:32:17.127 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc
00:32:17.127 456+0 records in
00:32:17.127 456+0 records out
00:32:17.127 233472 bytes (233 kB, 228 KiB) copied, 0.00416438 s, 56.1 MB/s
00:32:17.127 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0
00:32:17.127 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:32:17.127 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:32:17.127 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:32:17.127 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:32:17.127 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0
00:32:17.127 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:32:17.127 17:29:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:32:17.127 17:29:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:32:17.127 17:29:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:32:17.127 17:29:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i
00:32:17.127 17:29:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:32:17.127 17:29:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:32:17.386 17:29:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:32:17.386 17:29:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:32:17.386 17:29:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:32:17.386 17:29:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:32:17.386 17:29:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:32:17.386 17:29:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:32:17.386 [2024-11-26 17:29:17.970127] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:32:17.386 17:29:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break
00:32:17.386 17:29:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0
00:32:17.386 17:29:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock
00:32:17.386 17:29:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock
00:32:17.386 17:29:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks
00:32:17.643 17:29:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:32:17.644 17:29:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:32:17.644 17:29:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:32:17.644 17:29:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:32:17.644 17:29:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo ''
00:32:17.644 17:29:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:32:17.644 17:29:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true
00:32:17.644 17:29:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0
00:32:17.644 17:29:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0
00:32:17.644 17:29:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0
00:32:17.644 17:29:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']'
00:32:17.644 17:29:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60691
00:32:17.644 17:29:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60691 ']'
00:32:17.644 17:29:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60691
00:32:17.644 17:29:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname
00:32:17.644 17:29:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:17.644 17:29:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60691
00:32:17.644 killing process with pid 60691
17:29:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:32:17.644 17:29:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:32:17.644 17:29:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60691'
00:32:17.644 17:29:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60691
00:32:17.644 17:29:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60691
00:32:17.644 [2024-11-26 17:29:18.302485] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
[2024-11-26 17:29:18.302619] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
[2024-11-26 17:29:18.302681] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
[2024-11-26 17:29:18.302694] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline
00:32:17.902 [2024-11-26 17:29:18.530723] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:32:19.277 17:29:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0
00:32:19.277
00:32:19.277 real 0m3.995s
00:32:19.277 user 0m4.649s
00:32:19.277 sys 0m0.939s
00:32:19.277 17:29:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:19.277 17:29:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:32:19.277 ************************************
00:32:19.277 END TEST raid_function_test_concat
************************************
00:32:19.277 17:29:19 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0
00:32:19.277 17:29:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:32:19.277 17:29:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:32:19.277 17:29:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:32:19.277 ************************************
00:32:19.277 START TEST raid0_resize_test
************************************
00:32:19.277 17:29:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0
00:32:19.277 17:29:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0
00:32:19.277 17:29:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512
00:32:19.277 17:29:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32
00:32:19.277 17:29:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64
00:32:19.277 17:29:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt
00:32:19.277 17:29:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb
00:32:19.277 17:29:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb
00:32:19.277 17:29:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size
00:32:19.277 17:29:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60814
00:32:19.277 17:29:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:32:19.277 17:29:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60814'
00:32:19.277 Process raid pid: 60814
17:29:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60814
00:32:19.277 17:29:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60814 ']'
00:32:19.277 17:29:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:19.277 17:29:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:19.277 17:29:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:19.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
17:29:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:19.277 17:29:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:32:19.277 [2024-11-26 17:29:19.907718] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization...
00:32:19.277 [2024-11-26 17:29:19.907947] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:32:19.534 [2024-11-26 17:29:20.087753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:19.535 [2024-11-26 17:29:20.221124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:32:19.792 [2024-11-26 17:29:20.447242] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
[2024-11-26 17:29:20.447290] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:32:20.359 17:29:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:32:20.359 17:29:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0
00:32:20.359 17:29:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512
00:32:20.359 17:29:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:20.359 17:29:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:32:20.359 Base_1
17:29:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:20.359 17:29:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512
00:32:20.359 17:29:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:20.359 17:29:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:32:20.359 Base_2
17:29:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:20.359 17:29:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']'
00:32:20.359 17:29:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid
00:32:20.359 17:29:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:20.359 17:29:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:32:20.359 [2024-11-26 17:29:20.788899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
[2024-11-26 17:29:20.790670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
[2024-11-26 17:29:20.790761] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
[2024-11-26 17:29:20.790796] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
[2024-11-26 17:29:20.791058] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
[2024-11-26 17:29:20.791211] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
[2024-11-26 17:29:20.791264] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
[2024-11-26 17:29:20.791431] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:32:20.359 17:29:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:20.360 17:29:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64
00:32:20.360 17:29:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:20.360 17:29:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:32:20.360 [2024-11-26 17:29:20.796862] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
[2024-11-26 17:29:20.796925] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072
true
00:32:20.360 17:29:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:20.360 17:29:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks'
00:32:20.360 17:29:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid
00:32:20.360 17:29:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:20.360 17:29:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:32:20.360 [2024-11-26 17:29:20.809020] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:32:20.360 17:29:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:20.360 17:29:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072
00:32:20.360 17:29:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64
00:32:20.360 17:29:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']'
00:32:20.360 17:29:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64
00:32:20.360 17:29:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']'
00:32:20.360 17:29:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64
00:32:20.360 17:29:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:20.360 17:29:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:32:20.360 [2024-11-26 17:29:20.852823] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
[2024-11-26 17:29:20.852855] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072
[2024-11-26 17:29:20.852892] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144
true
00:32:20.360 17:29:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:20.360 17:29:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks'
00:32:20.360 17:29:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid
00:32:20.360 17:29:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:20.360 17:29:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:32:20.360 [2024-11-26 17:29:20.864987] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:32:20.360 17:29:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:20.360 17:29:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144
00:32:20.360 17:29:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128
00:32:20.360 17:29:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']'
00:32:20.360 17:29:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128
00:32:20.360 17:29:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']'
00:32:20.360 17:29:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60814
00:32:20.360 17:29:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60814 ']'
00:32:20.360 17:29:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60814
00:32:20.360 17:29:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname
00:32:20.360 17:29:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:20.360 17:29:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60814
00:32:20.360 17:29:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:32:20.360 17:29:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:32:20.360 17:29:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60814'
killing process with pid 60814
00:32:20.360 17:29:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60814
00:32:20.360 17:29:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60814
00:32:20.360 [2024-11-26 17:29:20.941425] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
[2024-11-26 17:29:20.941561] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
[2024-11-26 17:29:20.941655] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
[2024-11-26 17:29:20.941700] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
[2024-11-26 17:29:20.959928] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:32:21.737 17:29:22 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0
00:32:21.737
00:32:21.737 real 0m2.347s
00:32:21.737 user 0m2.504s
00:32:21.737 sys 0m0.333s
00:32:21.737 17:29:22 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable
************************************
00:32:21.737 END TEST raid0_resize_test
************************************
00:32:21.737 17:29:22 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:32:21.737 17:29:22 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1
00:32:21.737 17:29:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:32:21.737 17:29:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:32:21.737 17:29:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:32:21.737 ************************************
00:32:21.737 START TEST raid1_resize_test
************************************
00:32:21.737 17:29:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1
00:32:21.737 17:29:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1
00:32:21.737 17:29:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512
00:32:21.737 17:29:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32
00:32:21.738 17:29:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64
00:32:21.738 17:29:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt
00:32:21.738 17:29:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb
00:32:21.738 17:29:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb
00:32:21.738 17:29:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size
00:32:21.738 17:29:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60875
00:32:21.738 17:29:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:32:21.738 17:29:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60875'
00:32:21.738 Process raid pid: 60875
17:29:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60875
00:32:21.738 17:29:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60875 ']'
00:32:21.738 17:29:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:21.738 17:29:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:21.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
17:29:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:21.738 17:29:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:21.738 17:29:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:32:21.738 [2024-11-26 17:29:22.316340] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization...
00:32:21.738 [2024-11-26 17:29:22.316465] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:32:21.997 [2024-11-26 17:29:22.491768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:21.997 [2024-11-26 17:29:22.613315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:32:22.257 [2024-11-26 17:29:22.831125] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
[2024-11-26 17:29:22.831180] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:32:22.515 17:29:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:32:22.515 17:29:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0
00:32:22.515 17:29:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512
00:32:22.515 17:29:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:22.516 17:29:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:32:22.516 Base_1
17:29:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:22.516 17:29:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512
00:32:22.516 17:29:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:22.516 17:29:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:32:22.776 Base_2
17:29:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:22.776 17:29:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']'
00:32:22.776 17:29:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid
00:32:22.776 17:29:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:22.776 17:29:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:32:22.776 [2024-11-26 17:29:23.218078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
[2024-11-26 17:29:23.220102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
[2024-11-26 17:29:23.220221] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
[2024-11-26 17:29:23.220269] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
[2024-11-26 17:29:23.220594] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
[2024-11-26 17:29:23.220783] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
[2024-11-26 17:29:23.220826] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
[2024-11-26 17:29:23.221034] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:32:22.776 17:29:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:22.776 17:29:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64
00:32:22.776 17:29:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:22.776 17:29:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:32:22.776 [2024-11-26 17:29:23.230034] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
[2024-11-26 17:29:23.230108] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072
true
00:32:22.776 17:29:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:22.776 17:29:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid
00:32:22.776 17:29:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:22.776 17:29:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks'
00:32:22.776 17:29:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:32:22.776 [2024-11-26 17:29:23.246175] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:32:22.776 17:29:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:22.776 17:29:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536
00:32:22.776 17:29:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32
00:32:22.776 17:29:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']'
00:32:22.776 17:29:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32
00:32:22.776 17:29:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']'
00:32:22.776 17:29:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64
00:32:22.776 17:29:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:22.776 17:29:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:32:22.776 [2024-11-26 17:29:23.289952] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
[2024-11-26 17:29:23.290020] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072
[2024-11-26 17:29:23.290083] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072
true
00:32:22.776 17:29:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:22.776 17:29:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid
00:32:22.776 17:29:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:22.776 17:29:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:32:22.776 17:29:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks'
00:32:22.776 [2024-11-26 17:29:23.302102] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:32:22.776 17:29:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:22.776 17:29:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072
00:32:22.776 17:29:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64
00:32:22.776 17:29:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']'
00:32:22.776 17:29:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64
00:32:22.776 17:29:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']'
00:32:22.776 17:29:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60875
00:32:22.776 17:29:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60875 ']'
00:32:22.776 17:29:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60875
00:32:22.776 17:29:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname
00:32:22.776 17:29:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:22.776 17:29:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60875
00:32:22.776 killing process with pid 60875
17:29:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:32:22.776 17:29:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:32:22.776 17:29:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60875'
00:32:22.776 17:29:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60875
[2024-11-26 17:29:23.384713] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:32:22.776 17:29:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60875
00:32:22.776 [2024-11-26 17:29:23.384815] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
[2024-11-26 17:29:23.385366] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
[2024-11-26 17:29:23.385394] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
[2024-11-26 17:29:23.405338] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:32:24.155 17:29:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0
00:32:24.155
00:32:24.155 real 0m2.410s
00:32:24.155 user 0m2.586s
00:32:24.155 sys 0m0.335s
00:32:24.155 17:29:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:24.155 17:29:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:32:24.155 ************************************
00:32:24.155 END TEST raid1_resize_test
************************************
00:32:24.155 17:29:24 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4}
00:32:24.155 17:29:24 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:32:24.155 17:29:24 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false
00:32:24.155 17:29:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:32:24.155 17:29:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:32:24.155 17:29:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:32:24.155 ************************************
00:32:24.155 START TEST raid_state_function_test
************************************
00:32:24.155 17:29:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false
00:32:24.155 17:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0
00:32:24.155 17:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:32:24.155 17:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:32:24.155 17:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:32:24.155 17:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:32:24.155 17:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:32:24.155 17:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:32:24.155 17:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:32:24.155 17:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:32:24.156 17:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:32:24.156 17:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:32:24.156 17:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:32:24.156 17:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:32:24.156 17:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:32:24.156 17:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:32:24.156 17:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:32:24.156 17:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:32:24.156 17:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:32:24.156 17:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']'
00:32:24.156 17:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:32:24.156 17:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:32:24.156 17:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:32:24.156 17:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:32:24.156 17:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:32:24.156 17:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60938
00:32:24.156 17:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60938'
00:32:24.156 Process raid pid: 60938
17:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60938
00:32:24.156 17:29:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60938 ']'
00:32:24.156 17:29:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:24.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:24.156 17:29:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:24.156 17:29:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:24.156 17:29:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:24.156 17:29:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:24.156 [2024-11-26 17:29:24.802693] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:32:24.156 [2024-11-26 17:29:24.802813] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:24.414 [2024-11-26 17:29:24.981174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:24.674 [2024-11-26 17:29:25.109632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:24.674 [2024-11-26 17:29:25.334973] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:24.674 [2024-11-26 17:29:25.335010] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:25.241 17:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:25.241 17:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:32:25.241 17:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:32:25.241 17:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.241 17:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.241 [2024-11-26 
17:29:25.683145] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:25.241 [2024-11-26 17:29:25.683266] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:25.241 [2024-11-26 17:29:25.683303] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:25.241 [2024-11-26 17:29:25.683331] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:25.241 17:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.241 17:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:32:25.241 17:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:25.241 17:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:25.241 17:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:25.241 17:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:25.241 17:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:25.241 17:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:25.241 17:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:25.241 17:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:25.241 17:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:25.241 17:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:25.241 17:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:32:25.241 17:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.241 17:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.241 17:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.241 17:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:25.241 "name": "Existed_Raid", 00:32:25.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:25.241 "strip_size_kb": 64, 00:32:25.241 "state": "configuring", 00:32:25.241 "raid_level": "raid0", 00:32:25.241 "superblock": false, 00:32:25.241 "num_base_bdevs": 2, 00:32:25.241 "num_base_bdevs_discovered": 0, 00:32:25.241 "num_base_bdevs_operational": 2, 00:32:25.241 "base_bdevs_list": [ 00:32:25.241 { 00:32:25.241 "name": "BaseBdev1", 00:32:25.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:25.241 "is_configured": false, 00:32:25.241 "data_offset": 0, 00:32:25.241 "data_size": 0 00:32:25.241 }, 00:32:25.241 { 00:32:25.241 "name": "BaseBdev2", 00:32:25.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:25.241 "is_configured": false, 00:32:25.241 "data_offset": 0, 00:32:25.241 "data_size": 0 00:32:25.241 } 00:32:25.241 ] 00:32:25.241 }' 00:32:25.241 17:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:25.241 17:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.502 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:25.502 17:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.502 17:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.502 [2024-11-26 17:29:26.134346] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:25.502 
[2024-11-26 17:29:26.134388] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:32:25.502 17:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.502 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:32:25.502 17:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.502 17:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.502 [2024-11-26 17:29:26.146305] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:25.502 [2024-11-26 17:29:26.146418] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:25.502 [2024-11-26 17:29:26.146454] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:25.502 [2024-11-26 17:29:26.146483] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:25.502 17:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.502 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:32:25.502 17:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.502 17:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.762 [2024-11-26 17:29:26.194914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:25.762 BaseBdev1 00:32:25.762 17:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.762 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:32:25.762 
17:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:32:25.762 17:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:25.762 17:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:32:25.762 17:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:25.762 17:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:25.762 17:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:25.762 17:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.762 17:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.762 17:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.762 17:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:25.762 17:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.762 17:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.762 [ 00:32:25.762 { 00:32:25.762 "name": "BaseBdev1", 00:32:25.762 "aliases": [ 00:32:25.762 "d6b24a7e-88e2-4d02-9539-098dcb5d5c12" 00:32:25.762 ], 00:32:25.762 "product_name": "Malloc disk", 00:32:25.762 "block_size": 512, 00:32:25.762 "num_blocks": 65536, 00:32:25.762 "uuid": "d6b24a7e-88e2-4d02-9539-098dcb5d5c12", 00:32:25.762 "assigned_rate_limits": { 00:32:25.762 "rw_ios_per_sec": 0, 00:32:25.762 "rw_mbytes_per_sec": 0, 00:32:25.762 "r_mbytes_per_sec": 0, 00:32:25.762 "w_mbytes_per_sec": 0 00:32:25.762 }, 00:32:25.762 "claimed": true, 00:32:25.762 "claim_type": "exclusive_write", 00:32:25.762 "zoned": false, 00:32:25.762 
"supported_io_types": { 00:32:25.762 "read": true, 00:32:25.762 "write": true, 00:32:25.762 "unmap": true, 00:32:25.762 "flush": true, 00:32:25.762 "reset": true, 00:32:25.762 "nvme_admin": false, 00:32:25.762 "nvme_io": false, 00:32:25.762 "nvme_io_md": false, 00:32:25.762 "write_zeroes": true, 00:32:25.762 "zcopy": true, 00:32:25.762 "get_zone_info": false, 00:32:25.762 "zone_management": false, 00:32:25.762 "zone_append": false, 00:32:25.762 "compare": false, 00:32:25.762 "compare_and_write": false, 00:32:25.762 "abort": true, 00:32:25.762 "seek_hole": false, 00:32:25.762 "seek_data": false, 00:32:25.762 "copy": true, 00:32:25.762 "nvme_iov_md": false 00:32:25.762 }, 00:32:25.762 "memory_domains": [ 00:32:25.762 { 00:32:25.762 "dma_device_id": "system", 00:32:25.762 "dma_device_type": 1 00:32:25.762 }, 00:32:25.762 { 00:32:25.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:25.762 "dma_device_type": 2 00:32:25.762 } 00:32:25.762 ], 00:32:25.762 "driver_specific": {} 00:32:25.762 } 00:32:25.762 ] 00:32:25.762 17:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.762 17:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:32:25.762 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:32:25.762 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:25.762 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:25.762 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:25.762 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:25.762 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:25.762 17:29:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:25.762 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:25.762 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:25.762 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:25.762 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:25.762 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:25.762 17:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.762 17:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.762 17:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.762 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:25.762 "name": "Existed_Raid", 00:32:25.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:25.762 "strip_size_kb": 64, 00:32:25.762 "state": "configuring", 00:32:25.762 "raid_level": "raid0", 00:32:25.762 "superblock": false, 00:32:25.762 "num_base_bdevs": 2, 00:32:25.762 "num_base_bdevs_discovered": 1, 00:32:25.762 "num_base_bdevs_operational": 2, 00:32:25.762 "base_bdevs_list": [ 00:32:25.762 { 00:32:25.762 "name": "BaseBdev1", 00:32:25.762 "uuid": "d6b24a7e-88e2-4d02-9539-098dcb5d5c12", 00:32:25.762 "is_configured": true, 00:32:25.762 "data_offset": 0, 00:32:25.762 "data_size": 65536 00:32:25.762 }, 00:32:25.762 { 00:32:25.762 "name": "BaseBdev2", 00:32:25.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:25.762 "is_configured": false, 00:32:25.762 "data_offset": 0, 00:32:25.762 "data_size": 0 00:32:25.762 } 00:32:25.762 ] 00:32:25.762 }' 00:32:25.762 17:29:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:25.762 17:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:26.023 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:26.023 17:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.023 17:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:26.023 [2024-11-26 17:29:26.646193] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:26.023 [2024-11-26 17:29:26.646264] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:32:26.023 17:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.023 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:32:26.023 17:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.023 17:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:26.023 [2024-11-26 17:29:26.658213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:26.023 [2024-11-26 17:29:26.660249] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:26.023 [2024-11-26 17:29:26.660347] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:26.023 17:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.023 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:32:26.023 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:32:26.023 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:32:26.023 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:26.023 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:26.023 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:26.023 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:26.023 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:26.023 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:26.023 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:26.023 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:26.023 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:26.023 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:26.023 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:26.023 17:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.023 17:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:26.023 17:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.023 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:26.023 "name": "Existed_Raid", 00:32:26.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:26.023 "strip_size_kb": 64, 00:32:26.023 "state": 
"configuring", 00:32:26.023 "raid_level": "raid0", 00:32:26.023 "superblock": false, 00:32:26.023 "num_base_bdevs": 2, 00:32:26.023 "num_base_bdevs_discovered": 1, 00:32:26.023 "num_base_bdevs_operational": 2, 00:32:26.023 "base_bdevs_list": [ 00:32:26.023 { 00:32:26.023 "name": "BaseBdev1", 00:32:26.023 "uuid": "d6b24a7e-88e2-4d02-9539-098dcb5d5c12", 00:32:26.023 "is_configured": true, 00:32:26.023 "data_offset": 0, 00:32:26.023 "data_size": 65536 00:32:26.023 }, 00:32:26.023 { 00:32:26.023 "name": "BaseBdev2", 00:32:26.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:26.023 "is_configured": false, 00:32:26.023 "data_offset": 0, 00:32:26.023 "data_size": 0 00:32:26.023 } 00:32:26.023 ] 00:32:26.023 }' 00:32:26.023 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:26.023 17:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:26.593 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:32:26.593 17:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.593 17:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:26.593 [2024-11-26 17:29:27.135639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:26.593 [2024-11-26 17:29:27.135792] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:32:26.593 [2024-11-26 17:29:27.135823] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:32:26.593 [2024-11-26 17:29:27.136135] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:32:26.593 [2024-11-26 17:29:27.136369] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:32:26.593 [2024-11-26 17:29:27.136418] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:32:26.593 [2024-11-26 17:29:27.136767] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:26.593 BaseBdev2 00:32:26.593 17:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.593 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:32:26.593 17:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:32:26.593 17:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:26.593 17:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:32:26.593 17:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:26.593 17:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:26.593 17:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:26.593 17:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.593 17:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:26.593 17:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.593 17:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:26.593 17:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.593 17:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:26.593 [ 00:32:26.593 { 00:32:26.593 "name": "BaseBdev2", 00:32:26.593 "aliases": [ 00:32:26.593 "23242492-8bc1-46ff-962f-28d4043ea205" 00:32:26.593 ], 00:32:26.593 "product_name": "Malloc disk", 00:32:26.593 "block_size": 
512, 00:32:26.593 "num_blocks": 65536, 00:32:26.593 "uuid": "23242492-8bc1-46ff-962f-28d4043ea205", 00:32:26.593 "assigned_rate_limits": { 00:32:26.593 "rw_ios_per_sec": 0, 00:32:26.593 "rw_mbytes_per_sec": 0, 00:32:26.593 "r_mbytes_per_sec": 0, 00:32:26.593 "w_mbytes_per_sec": 0 00:32:26.593 }, 00:32:26.593 "claimed": true, 00:32:26.593 "claim_type": "exclusive_write", 00:32:26.593 "zoned": false, 00:32:26.593 "supported_io_types": { 00:32:26.593 "read": true, 00:32:26.593 "write": true, 00:32:26.593 "unmap": true, 00:32:26.593 "flush": true, 00:32:26.593 "reset": true, 00:32:26.593 "nvme_admin": false, 00:32:26.593 "nvme_io": false, 00:32:26.593 "nvme_io_md": false, 00:32:26.593 "write_zeroes": true, 00:32:26.593 "zcopy": true, 00:32:26.593 "get_zone_info": false, 00:32:26.593 "zone_management": false, 00:32:26.593 "zone_append": false, 00:32:26.593 "compare": false, 00:32:26.593 "compare_and_write": false, 00:32:26.593 "abort": true, 00:32:26.593 "seek_hole": false, 00:32:26.593 "seek_data": false, 00:32:26.593 "copy": true, 00:32:26.593 "nvme_iov_md": false 00:32:26.593 }, 00:32:26.593 "memory_domains": [ 00:32:26.593 { 00:32:26.593 "dma_device_id": "system", 00:32:26.593 "dma_device_type": 1 00:32:26.593 }, 00:32:26.593 { 00:32:26.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:26.593 "dma_device_type": 2 00:32:26.593 } 00:32:26.593 ], 00:32:26.593 "driver_specific": {} 00:32:26.593 } 00:32:26.593 ] 00:32:26.593 17:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.593 17:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:32:26.593 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:32:26.593 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:26.593 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 
00:32:26.593 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:26.593 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:26.593 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:26.593 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:26.593 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:26.593 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:26.593 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:26.593 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:26.593 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:26.593 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:26.593 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:26.593 17:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.593 17:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:26.593 17:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.593 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:26.593 "name": "Existed_Raid", 00:32:26.593 "uuid": "6881f4fc-5bfe-411c-b6f9-ccb218cae299", 00:32:26.593 "strip_size_kb": 64, 00:32:26.593 "state": "online", 00:32:26.593 "raid_level": "raid0", 00:32:26.593 "superblock": false, 00:32:26.593 "num_base_bdevs": 2, 00:32:26.593 "num_base_bdevs_discovered": 2, 
00:32:26.593 "num_base_bdevs_operational": 2, 00:32:26.593 "base_bdevs_list": [ 00:32:26.593 { 00:32:26.593 "name": "BaseBdev1", 00:32:26.593 "uuid": "d6b24a7e-88e2-4d02-9539-098dcb5d5c12", 00:32:26.593 "is_configured": true, 00:32:26.593 "data_offset": 0, 00:32:26.593 "data_size": 65536 00:32:26.593 }, 00:32:26.593 { 00:32:26.593 "name": "BaseBdev2", 00:32:26.593 "uuid": "23242492-8bc1-46ff-962f-28d4043ea205", 00:32:26.593 "is_configured": true, 00:32:26.593 "data_offset": 0, 00:32:26.593 "data_size": 65536 00:32:26.593 } 00:32:26.593 ] 00:32:26.593 }' 00:32:26.593 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:26.593 17:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.164 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:32:27.164 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:32:27.164 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:27.164 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:27.164 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:32:27.164 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:27.164 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:32:27.164 17:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.164 17:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.164 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:27.164 [2024-11-26 17:29:27.655994] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:32:27.164 17:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.164 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:27.164 "name": "Existed_Raid", 00:32:27.164 "aliases": [ 00:32:27.164 "6881f4fc-5bfe-411c-b6f9-ccb218cae299" 00:32:27.164 ], 00:32:27.164 "product_name": "Raid Volume", 00:32:27.164 "block_size": 512, 00:32:27.164 "num_blocks": 131072, 00:32:27.164 "uuid": "6881f4fc-5bfe-411c-b6f9-ccb218cae299", 00:32:27.164 "assigned_rate_limits": { 00:32:27.164 "rw_ios_per_sec": 0, 00:32:27.164 "rw_mbytes_per_sec": 0, 00:32:27.164 "r_mbytes_per_sec": 0, 00:32:27.164 "w_mbytes_per_sec": 0 00:32:27.164 }, 00:32:27.164 "claimed": false, 00:32:27.164 "zoned": false, 00:32:27.164 "supported_io_types": { 00:32:27.164 "read": true, 00:32:27.164 "write": true, 00:32:27.164 "unmap": true, 00:32:27.164 "flush": true, 00:32:27.164 "reset": true, 00:32:27.164 "nvme_admin": false, 00:32:27.164 "nvme_io": false, 00:32:27.164 "nvme_io_md": false, 00:32:27.164 "write_zeroes": true, 00:32:27.164 "zcopy": false, 00:32:27.164 "get_zone_info": false, 00:32:27.164 "zone_management": false, 00:32:27.164 "zone_append": false, 00:32:27.164 "compare": false, 00:32:27.164 "compare_and_write": false, 00:32:27.164 "abort": false, 00:32:27.164 "seek_hole": false, 00:32:27.164 "seek_data": false, 00:32:27.164 "copy": false, 00:32:27.164 "nvme_iov_md": false 00:32:27.164 }, 00:32:27.164 "memory_domains": [ 00:32:27.164 { 00:32:27.164 "dma_device_id": "system", 00:32:27.164 "dma_device_type": 1 00:32:27.164 }, 00:32:27.164 { 00:32:27.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:27.164 "dma_device_type": 2 00:32:27.164 }, 00:32:27.164 { 00:32:27.164 "dma_device_id": "system", 00:32:27.164 "dma_device_type": 1 00:32:27.164 }, 00:32:27.164 { 00:32:27.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:27.164 "dma_device_type": 2 00:32:27.165 } 00:32:27.165 ], 
00:32:27.165 "driver_specific": { 00:32:27.165 "raid": { 00:32:27.165 "uuid": "6881f4fc-5bfe-411c-b6f9-ccb218cae299", 00:32:27.165 "strip_size_kb": 64, 00:32:27.165 "state": "online", 00:32:27.165 "raid_level": "raid0", 00:32:27.165 "superblock": false, 00:32:27.165 "num_base_bdevs": 2, 00:32:27.165 "num_base_bdevs_discovered": 2, 00:32:27.165 "num_base_bdevs_operational": 2, 00:32:27.165 "base_bdevs_list": [ 00:32:27.165 { 00:32:27.165 "name": "BaseBdev1", 00:32:27.165 "uuid": "d6b24a7e-88e2-4d02-9539-098dcb5d5c12", 00:32:27.165 "is_configured": true, 00:32:27.165 "data_offset": 0, 00:32:27.165 "data_size": 65536 00:32:27.165 }, 00:32:27.165 { 00:32:27.165 "name": "BaseBdev2", 00:32:27.165 "uuid": "23242492-8bc1-46ff-962f-28d4043ea205", 00:32:27.165 "is_configured": true, 00:32:27.165 "data_offset": 0, 00:32:27.165 "data_size": 65536 00:32:27.165 } 00:32:27.165 ] 00:32:27.165 } 00:32:27.165 } 00:32:27.165 }' 00:32:27.165 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:27.165 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:32:27.165 BaseBdev2' 00:32:27.165 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:27.165 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:32:27.165 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:27.165 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:27.165 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:32:27.165 17:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:32:27.165 17:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.165 17:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.165 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:27.165 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:27.165 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:27.165 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:32:27.165 17:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.165 17:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.165 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:27.165 17:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.165 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:27.165 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:27.165 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:32:27.165 17:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.165 17:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.425 [2024-11-26 17:29:27.859704] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:27.425 [2024-11-26 17:29:27.859789] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:27.425 [2024-11-26 
17:29:27.859850] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:27.425 17:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.425 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:32:27.425 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:32:27.425 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:32:27.425 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:32:27.425 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:32:27.425 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:32:27.425 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:27.425 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:32:27.425 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:27.425 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:27.425 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:27.425 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:27.425 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:27.425 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:27.425 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:27.425 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:27.425 
17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:27.425 17:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.425 17:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.425 17:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.425 17:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:27.425 "name": "Existed_Raid", 00:32:27.425 "uuid": "6881f4fc-5bfe-411c-b6f9-ccb218cae299", 00:32:27.425 "strip_size_kb": 64, 00:32:27.425 "state": "offline", 00:32:27.425 "raid_level": "raid0", 00:32:27.425 "superblock": false, 00:32:27.425 "num_base_bdevs": 2, 00:32:27.425 "num_base_bdevs_discovered": 1, 00:32:27.425 "num_base_bdevs_operational": 1, 00:32:27.425 "base_bdevs_list": [ 00:32:27.425 { 00:32:27.425 "name": null, 00:32:27.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:27.425 "is_configured": false, 00:32:27.425 "data_offset": 0, 00:32:27.425 "data_size": 65536 00:32:27.425 }, 00:32:27.425 { 00:32:27.425 "name": "BaseBdev2", 00:32:27.425 "uuid": "23242492-8bc1-46ff-962f-28d4043ea205", 00:32:27.425 "is_configured": true, 00:32:27.425 "data_offset": 0, 00:32:27.425 "data_size": 65536 00:32:27.425 } 00:32:27.425 ] 00:32:27.425 }' 00:32:27.425 17:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:27.425 17:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.994 17:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:32:27.994 17:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:27.994 17:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:27.994 17:29:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:32:27.994 17:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.994 17:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.994 17:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.994 17:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:32:27.994 17:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:27.994 17:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:32:27.994 17:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.994 17:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.994 [2024-11-26 17:29:28.449415] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:27.994 [2024-11-26 17:29:28.449477] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:32:27.994 17:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.994 17:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:32:27.994 17:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:27.994 17:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:27.994 17:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.994 17:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:32:27.994 17:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:32:27.994 17:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.994 17:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:32:27.994 17:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:32:27.994 17:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:32:27.994 17:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60938 00:32:27.994 17:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60938 ']' 00:32:27.994 17:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 60938 00:32:27.994 17:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:32:27.994 17:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:27.994 17:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60938 00:32:27.994 killing process with pid 60938 00:32:27.994 17:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:27.994 17:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:27.994 17:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60938' 00:32:27.994 17:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60938 00:32:27.994 [2024-11-26 17:29:28.650285] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:27.994 17:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60938 00:32:27.994 [2024-11-26 17:29:28.669293] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:29.373 17:29:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@328 -- # return 0 00:32:29.373 00:32:29.373 real 0m5.167s 00:32:29.373 user 0m7.428s 00:32:29.373 sys 0m0.808s 00:32:29.373 17:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:29.373 ************************************ 00:32:29.373 END TEST raid_state_function_test 00:32:29.373 ************************************ 00:32:29.373 17:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:29.373 17:29:29 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:32:29.373 17:29:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:32:29.373 17:29:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:29.373 17:29:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:29.373 ************************************ 00:32:29.373 START TEST raid_state_function_test_sb 00:32:29.373 ************************************ 00:32:29.373 17:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:32:29.373 17:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:32:29.373 17:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:32:29.373 17:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:32:29.373 17:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:32:29.373 17:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:32:29.373 17:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:29.373 17:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:32:29.373 17:29:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:29.373 17:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:29.373 17:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:32:29.373 17:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:29.373 17:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:29.373 17:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:32:29.373 17:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:32:29.373 17:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:32:29.373 17:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:32:29.373 17:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:32:29.373 17:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:32:29.373 17:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:32:29.373 17:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:32:29.373 17:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:32:29.373 17:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:32:29.373 17:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:32:29.373 17:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61191 00:32:29.373 17:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L 
bdev_raid 00:32:29.373 17:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61191' 00:32:29.373 Process raid pid: 61191 00:32:29.373 17:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61191 00:32:29.373 17:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61191 ']' 00:32:29.373 17:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:29.373 17:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:29.373 17:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:29.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:29.373 17:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:29.373 17:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:29.373 [2024-11-26 17:29:30.037297] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:32:29.373 [2024-11-26 17:29:30.037621] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:29.643 [2024-11-26 17:29:30.222235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:29.934 [2024-11-26 17:29:30.347699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:29.934 [2024-11-26 17:29:30.563155] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:29.934 [2024-11-26 17:29:30.563266] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:30.194 17:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:30.194 17:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:32:30.194 17:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:32:30.194 17:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.194 17:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:30.454 [2024-11-26 17:29:30.895775] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:30.454 [2024-11-26 17:29:30.895915] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:30.454 [2024-11-26 17:29:30.895952] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:30.454 [2024-11-26 17:29:30.895981] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:30.454 17:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.454 
17:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:32:30.454 17:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:30.454 17:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:30.454 17:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:30.454 17:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:30.454 17:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:30.454 17:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:30.454 17:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:30.454 17:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:30.454 17:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:30.454 17:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:30.454 17:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:30.454 17:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.454 17:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:30.454 17:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.454 17:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:30.454 "name": "Existed_Raid", 00:32:30.454 "uuid": "894e6944-0267-4da7-9cea-d030c4a80121", 00:32:30.454 "strip_size_kb": 
64, 00:32:30.454 "state": "configuring", 00:32:30.454 "raid_level": "raid0", 00:32:30.454 "superblock": true, 00:32:30.454 "num_base_bdevs": 2, 00:32:30.454 "num_base_bdevs_discovered": 0, 00:32:30.454 "num_base_bdevs_operational": 2, 00:32:30.454 "base_bdevs_list": [ 00:32:30.454 { 00:32:30.454 "name": "BaseBdev1", 00:32:30.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:30.454 "is_configured": false, 00:32:30.454 "data_offset": 0, 00:32:30.454 "data_size": 0 00:32:30.454 }, 00:32:30.454 { 00:32:30.454 "name": "BaseBdev2", 00:32:30.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:30.454 "is_configured": false, 00:32:30.454 "data_offset": 0, 00:32:30.454 "data_size": 0 00:32:30.454 } 00:32:30.454 ] 00:32:30.454 }' 00:32:30.454 17:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:30.454 17:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:30.714 17:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:30.714 17:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.714 17:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:30.714 [2024-11-26 17:29:31.367727] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:30.714 [2024-11-26 17:29:31.367854] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:32:30.714 17:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.714 17:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:32:30.714 17:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.714 17:29:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:30.714 [2024-11-26 17:29:31.375739] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:30.714 [2024-11-26 17:29:31.375802] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:30.714 [2024-11-26 17:29:31.375813] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:30.714 [2024-11-26 17:29:31.375827] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:30.714 17:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.714 17:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:32:30.714 17:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.714 17:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:30.974 [2024-11-26 17:29:31.421920] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:30.974 BaseBdev1 00:32:30.974 17:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.974 17:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:32:30.974 17:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:32:30.974 17:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:30.974 17:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:32:30.974 17:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:30.974 17:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:32:30.974 17:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:30.974 17:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.974 17:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:30.974 17:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.974 17:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:30.974 17:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.974 17:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:30.974 [ 00:32:30.974 { 00:32:30.974 "name": "BaseBdev1", 00:32:30.974 "aliases": [ 00:32:30.974 "1ec2bc6e-d649-4609-af2c-4c94bbcde011" 00:32:30.974 ], 00:32:30.974 "product_name": "Malloc disk", 00:32:30.974 "block_size": 512, 00:32:30.974 "num_blocks": 65536, 00:32:30.974 "uuid": "1ec2bc6e-d649-4609-af2c-4c94bbcde011", 00:32:30.974 "assigned_rate_limits": { 00:32:30.974 "rw_ios_per_sec": 0, 00:32:30.974 "rw_mbytes_per_sec": 0, 00:32:30.974 "r_mbytes_per_sec": 0, 00:32:30.974 "w_mbytes_per_sec": 0 00:32:30.974 }, 00:32:30.974 "claimed": true, 00:32:30.974 "claim_type": "exclusive_write", 00:32:30.974 "zoned": false, 00:32:30.974 "supported_io_types": { 00:32:30.974 "read": true, 00:32:30.974 "write": true, 00:32:30.974 "unmap": true, 00:32:30.974 "flush": true, 00:32:30.974 "reset": true, 00:32:30.974 "nvme_admin": false, 00:32:30.974 "nvme_io": false, 00:32:30.974 "nvme_io_md": false, 00:32:30.974 "write_zeroes": true, 00:32:30.974 "zcopy": true, 00:32:30.974 "get_zone_info": false, 00:32:30.974 "zone_management": false, 00:32:30.974 "zone_append": false, 00:32:30.974 "compare": false, 00:32:30.974 "compare_and_write": false, 00:32:30.974 
"abort": true, 00:32:30.974 "seek_hole": false, 00:32:30.974 "seek_data": false, 00:32:30.974 "copy": true, 00:32:30.974 "nvme_iov_md": false 00:32:30.974 }, 00:32:30.974 "memory_domains": [ 00:32:30.974 { 00:32:30.974 "dma_device_id": "system", 00:32:30.974 "dma_device_type": 1 00:32:30.974 }, 00:32:30.974 { 00:32:30.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:30.974 "dma_device_type": 2 00:32:30.974 } 00:32:30.974 ], 00:32:30.974 "driver_specific": {} 00:32:30.974 } 00:32:30.974 ] 00:32:30.974 17:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.974 17:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:32:30.974 17:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:32:30.974 17:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:30.974 17:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:30.974 17:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:30.974 17:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:30.974 17:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:30.974 17:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:30.974 17:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:30.974 17:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:30.974 17:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:30.974 17:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:32:30.974 17:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:30.974 17:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.974 17:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:30.974 17:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.974 17:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:30.974 "name": "Existed_Raid", 00:32:30.974 "uuid": "06a0dde0-a41f-45cb-9f3a-242e202646d3", 00:32:30.974 "strip_size_kb": 64, 00:32:30.974 "state": "configuring", 00:32:30.974 "raid_level": "raid0", 00:32:30.974 "superblock": true, 00:32:30.974 "num_base_bdevs": 2, 00:32:30.974 "num_base_bdevs_discovered": 1, 00:32:30.974 "num_base_bdevs_operational": 2, 00:32:30.974 "base_bdevs_list": [ 00:32:30.974 { 00:32:30.974 "name": "BaseBdev1", 00:32:30.974 "uuid": "1ec2bc6e-d649-4609-af2c-4c94bbcde011", 00:32:30.974 "is_configured": true, 00:32:30.974 "data_offset": 2048, 00:32:30.974 "data_size": 63488 00:32:30.974 }, 00:32:30.974 { 00:32:30.974 "name": "BaseBdev2", 00:32:30.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:30.974 "is_configured": false, 00:32:30.974 "data_offset": 0, 00:32:30.974 "data_size": 0 00:32:30.974 } 00:32:30.974 ] 00:32:30.974 }' 00:32:30.974 17:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:30.974 17:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:31.234 17:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:31.234 17:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.234 17:29:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:32:31.234 [2024-11-26 17:29:31.857213] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:31.234 [2024-11-26 17:29:31.857346] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:32:31.234 17:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.234 17:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:32:31.234 17:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.234 17:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:31.234 [2024-11-26 17:29:31.869243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:31.234 [2024-11-26 17:29:31.871073] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:31.234 [2024-11-26 17:29:31.871123] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:31.234 17:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.234 17:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:32:31.234 17:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:31.234 17:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:32:31.234 17:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:31.234 17:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:31.234 17:29:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:31.234 17:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:31.234 17:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:31.234 17:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:31.234 17:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:31.234 17:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:31.234 17:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:31.234 17:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:31.235 17:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.235 17:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:31.235 17:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:31.235 17:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.495 17:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:31.495 "name": "Existed_Raid", 00:32:31.495 "uuid": "90de7328-7cc2-4174-b305-33242053626f", 00:32:31.495 "strip_size_kb": 64, 00:32:31.495 "state": "configuring", 00:32:31.495 "raid_level": "raid0", 00:32:31.495 "superblock": true, 00:32:31.495 "num_base_bdevs": 2, 00:32:31.495 "num_base_bdevs_discovered": 1, 00:32:31.495 "num_base_bdevs_operational": 2, 00:32:31.495 "base_bdevs_list": [ 00:32:31.495 { 00:32:31.495 "name": "BaseBdev1", 00:32:31.495 "uuid": "1ec2bc6e-d649-4609-af2c-4c94bbcde011", 00:32:31.495 "is_configured": true, 00:32:31.495 "data_offset": 2048, 
00:32:31.495 "data_size": 63488 00:32:31.495 }, 00:32:31.495 { 00:32:31.495 "name": "BaseBdev2", 00:32:31.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:31.495 "is_configured": false, 00:32:31.495 "data_offset": 0, 00:32:31.495 "data_size": 0 00:32:31.495 } 00:32:31.495 ] 00:32:31.495 }' 00:32:31.495 17:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:31.495 17:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:31.755 17:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:32:31.755 17:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.755 17:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:31.755 [2024-11-26 17:29:32.356983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:31.755 [2024-11-26 17:29:32.357344] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:32:31.755 [2024-11-26 17:29:32.357400] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:32:31.755 [2024-11-26 17:29:32.357716] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:32:31.755 [2024-11-26 17:29:32.357936] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:32:31.755 BaseBdev2 00:32:31.755 [2024-11-26 17:29:32.357993] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:32:31.755 [2024-11-26 17:29:32.358192] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:31.755 17:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.755 17:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:32:31.755 17:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:32:31.756 17:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:31.756 17:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:32:31.756 17:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:31.756 17:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:31.756 17:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:31.756 17:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.756 17:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:31.756 17:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.756 17:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:31.756 17:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.756 17:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:31.756 [ 00:32:31.756 { 00:32:31.756 "name": "BaseBdev2", 00:32:31.756 "aliases": [ 00:32:31.756 "8fa18541-4e35-4cb2-aca7-c874bcea3924" 00:32:31.756 ], 00:32:31.756 "product_name": "Malloc disk", 00:32:31.756 "block_size": 512, 00:32:31.756 "num_blocks": 65536, 00:32:31.756 "uuid": "8fa18541-4e35-4cb2-aca7-c874bcea3924", 00:32:31.756 "assigned_rate_limits": { 00:32:31.756 "rw_ios_per_sec": 0, 00:32:31.756 "rw_mbytes_per_sec": 0, 00:32:31.756 "r_mbytes_per_sec": 0, 00:32:31.756 "w_mbytes_per_sec": 0 00:32:31.756 }, 00:32:31.756 "claimed": true, 00:32:31.756 "claim_type": 
"exclusive_write", 00:32:31.756 "zoned": false, 00:32:31.756 "supported_io_types": { 00:32:31.756 "read": true, 00:32:31.756 "write": true, 00:32:31.756 "unmap": true, 00:32:31.756 "flush": true, 00:32:31.756 "reset": true, 00:32:31.756 "nvme_admin": false, 00:32:31.756 "nvme_io": false, 00:32:31.756 "nvme_io_md": false, 00:32:31.756 "write_zeroes": true, 00:32:31.756 "zcopy": true, 00:32:31.756 "get_zone_info": false, 00:32:31.756 "zone_management": false, 00:32:31.756 "zone_append": false, 00:32:31.756 "compare": false, 00:32:31.756 "compare_and_write": false, 00:32:31.756 "abort": true, 00:32:31.756 "seek_hole": false, 00:32:31.756 "seek_data": false, 00:32:31.756 "copy": true, 00:32:31.756 "nvme_iov_md": false 00:32:31.756 }, 00:32:31.756 "memory_domains": [ 00:32:31.756 { 00:32:31.756 "dma_device_id": "system", 00:32:31.756 "dma_device_type": 1 00:32:31.756 }, 00:32:31.756 { 00:32:31.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:31.756 "dma_device_type": 2 00:32:31.756 } 00:32:31.756 ], 00:32:31.756 "driver_specific": {} 00:32:31.756 } 00:32:31.756 ] 00:32:31.756 17:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.756 17:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:32:31.756 17:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:32:31.756 17:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:31.756 17:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:32:31.756 17:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:31.756 17:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:31.756 17:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:32:31.756 17:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:31.756 17:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:31.756 17:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:31.756 17:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:31.756 17:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:31.756 17:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:31.756 17:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:31.756 17:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.756 17:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:31.756 17:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:31.756 17:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.015 17:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:32.015 "name": "Existed_Raid", 00:32:32.015 "uuid": "90de7328-7cc2-4174-b305-33242053626f", 00:32:32.015 "strip_size_kb": 64, 00:32:32.015 "state": "online", 00:32:32.015 "raid_level": "raid0", 00:32:32.015 "superblock": true, 00:32:32.015 "num_base_bdevs": 2, 00:32:32.015 "num_base_bdevs_discovered": 2, 00:32:32.015 "num_base_bdevs_operational": 2, 00:32:32.015 "base_bdevs_list": [ 00:32:32.015 { 00:32:32.015 "name": "BaseBdev1", 00:32:32.015 "uuid": "1ec2bc6e-d649-4609-af2c-4c94bbcde011", 00:32:32.015 "is_configured": true, 00:32:32.015 "data_offset": 2048, 00:32:32.015 "data_size": 63488 
00:32:32.015 }, 00:32:32.015 { 00:32:32.015 "name": "BaseBdev2", 00:32:32.015 "uuid": "8fa18541-4e35-4cb2-aca7-c874bcea3924", 00:32:32.015 "is_configured": true, 00:32:32.015 "data_offset": 2048, 00:32:32.015 "data_size": 63488 00:32:32.015 } 00:32:32.015 ] 00:32:32.015 }' 00:32:32.015 17:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:32.015 17:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:32.275 17:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:32:32.275 17:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:32:32.275 17:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:32.275 17:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:32.275 17:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:32:32.275 17:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:32.275 17:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:32:32.275 17:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:32.275 17:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.275 17:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:32.275 [2024-11-26 17:29:32.860567] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:32.275 17:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.275 17:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:32.275 "name": 
"Existed_Raid", 00:32:32.275 "aliases": [ 00:32:32.275 "90de7328-7cc2-4174-b305-33242053626f" 00:32:32.275 ], 00:32:32.275 "product_name": "Raid Volume", 00:32:32.275 "block_size": 512, 00:32:32.275 "num_blocks": 126976, 00:32:32.275 "uuid": "90de7328-7cc2-4174-b305-33242053626f", 00:32:32.275 "assigned_rate_limits": { 00:32:32.275 "rw_ios_per_sec": 0, 00:32:32.275 "rw_mbytes_per_sec": 0, 00:32:32.275 "r_mbytes_per_sec": 0, 00:32:32.275 "w_mbytes_per_sec": 0 00:32:32.275 }, 00:32:32.275 "claimed": false, 00:32:32.275 "zoned": false, 00:32:32.275 "supported_io_types": { 00:32:32.275 "read": true, 00:32:32.275 "write": true, 00:32:32.275 "unmap": true, 00:32:32.275 "flush": true, 00:32:32.275 "reset": true, 00:32:32.275 "nvme_admin": false, 00:32:32.275 "nvme_io": false, 00:32:32.275 "nvme_io_md": false, 00:32:32.275 "write_zeroes": true, 00:32:32.275 "zcopy": false, 00:32:32.275 "get_zone_info": false, 00:32:32.275 "zone_management": false, 00:32:32.275 "zone_append": false, 00:32:32.275 "compare": false, 00:32:32.275 "compare_and_write": false, 00:32:32.275 "abort": false, 00:32:32.275 "seek_hole": false, 00:32:32.275 "seek_data": false, 00:32:32.275 "copy": false, 00:32:32.275 "nvme_iov_md": false 00:32:32.275 }, 00:32:32.275 "memory_domains": [ 00:32:32.275 { 00:32:32.275 "dma_device_id": "system", 00:32:32.275 "dma_device_type": 1 00:32:32.275 }, 00:32:32.275 { 00:32:32.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:32.275 "dma_device_type": 2 00:32:32.275 }, 00:32:32.275 { 00:32:32.275 "dma_device_id": "system", 00:32:32.275 "dma_device_type": 1 00:32:32.275 }, 00:32:32.275 { 00:32:32.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:32.275 "dma_device_type": 2 00:32:32.275 } 00:32:32.275 ], 00:32:32.275 "driver_specific": { 00:32:32.275 "raid": { 00:32:32.275 "uuid": "90de7328-7cc2-4174-b305-33242053626f", 00:32:32.275 "strip_size_kb": 64, 00:32:32.275 "state": "online", 00:32:32.275 "raid_level": "raid0", 00:32:32.275 "superblock": true, 00:32:32.275 
"num_base_bdevs": 2, 00:32:32.275 "num_base_bdevs_discovered": 2, 00:32:32.275 "num_base_bdevs_operational": 2, 00:32:32.275 "base_bdevs_list": [ 00:32:32.275 { 00:32:32.275 "name": "BaseBdev1", 00:32:32.275 "uuid": "1ec2bc6e-d649-4609-af2c-4c94bbcde011", 00:32:32.275 "is_configured": true, 00:32:32.275 "data_offset": 2048, 00:32:32.275 "data_size": 63488 00:32:32.275 }, 00:32:32.275 { 00:32:32.275 "name": "BaseBdev2", 00:32:32.275 "uuid": "8fa18541-4e35-4cb2-aca7-c874bcea3924", 00:32:32.275 "is_configured": true, 00:32:32.275 "data_offset": 2048, 00:32:32.275 "data_size": 63488 00:32:32.276 } 00:32:32.276 ] 00:32:32.276 } 00:32:32.276 } 00:32:32.276 }' 00:32:32.276 17:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:32.276 17:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:32:32.276 BaseBdev2' 00:32:32.276 17:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:32.535 17:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:32:32.535 17:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:32.535 17:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:32:32.535 17:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.535 17:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:32.535 17:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:32.535 17:29:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:32:32.535 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:32.535 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:32.535 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:32.535 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:32:32.535 17:29:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.535 17:29:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:32.535 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:32.535 17:29:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.535 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:32.535 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:32.535 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:32:32.535 17:29:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.535 17:29:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:32.535 [2024-11-26 17:29:33.095893] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:32.535 [2024-11-26 17:29:33.095994] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:32.535 [2024-11-26 17:29:33.096072] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:32.535 17:29:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:32:32.535 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:32:32.535 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:32:32.535 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:32:32.535 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:32:32.535 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:32:32.536 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:32:32.536 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:32.536 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:32:32.536 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:32.536 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:32.536 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:32.536 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:32.536 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:32.536 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:32.536 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:32.536 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:32.536 17:29:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.536 17:29:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:32.536 17:29:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:32.795 17:29:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.795 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:32.795 "name": "Existed_Raid", 00:32:32.795 "uuid": "90de7328-7cc2-4174-b305-33242053626f", 00:32:32.795 "strip_size_kb": 64, 00:32:32.795 "state": "offline", 00:32:32.795 "raid_level": "raid0", 00:32:32.795 "superblock": true, 00:32:32.795 "num_base_bdevs": 2, 00:32:32.795 "num_base_bdevs_discovered": 1, 00:32:32.795 "num_base_bdevs_operational": 1, 00:32:32.795 "base_bdevs_list": [ 00:32:32.795 { 00:32:32.795 "name": null, 00:32:32.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:32.795 "is_configured": false, 00:32:32.795 "data_offset": 0, 00:32:32.795 "data_size": 63488 00:32:32.795 }, 00:32:32.795 { 00:32:32.795 "name": "BaseBdev2", 00:32:32.795 "uuid": "8fa18541-4e35-4cb2-aca7-c874bcea3924", 00:32:32.795 "is_configured": true, 00:32:32.795 "data_offset": 2048, 00:32:32.795 "data_size": 63488 00:32:32.795 } 00:32:32.795 ] 00:32:32.795 }' 00:32:32.795 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:32.795 17:29:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:33.055 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:32:33.055 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:33.055 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:33.055 17:29:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.055 17:29:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:33.055 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:32:33.055 17:29:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.055 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:32:33.055 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:33.055 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:32:33.055 17:29:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.055 17:29:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:33.055 [2024-11-26 17:29:33.686044] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:33.055 [2024-11-26 17:29:33.686131] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:32:33.314 17:29:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.314 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:32:33.314 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:33.314 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:33.314 17:29:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.314 17:29:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:33.314 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:32:33.314 17:29:33 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.314 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:32:33.314 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:32:33.314 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:32:33.314 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61191 00:32:33.314 17:29:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61191 ']' 00:32:33.314 17:29:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61191 00:32:33.314 17:29:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:32:33.314 17:29:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:33.314 17:29:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61191 00:32:33.314 17:29:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:33.314 killing process with pid 61191 00:32:33.314 17:29:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:33.314 17:29:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61191' 00:32:33.314 17:29:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61191 00:32:33.314 [2024-11-26 17:29:33.884500] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:33.314 17:29:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61191 00:32:33.314 [2024-11-26 17:29:33.903274] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:34.744 17:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 
0 00:32:34.744 00:32:34.744 real 0m5.192s 00:32:34.744 user 0m7.432s 00:32:34.744 sys 0m0.838s 00:32:34.744 17:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:34.744 17:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:34.744 ************************************ 00:32:34.744 END TEST raid_state_function_test_sb 00:32:34.744 ************************************ 00:32:34.744 17:29:35 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:32:34.744 17:29:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:34.744 17:29:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:34.744 17:29:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:34.744 ************************************ 00:32:34.744 START TEST raid_superblock_test 00:32:34.744 ************************************ 00:32:34.745 17:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:32:34.745 17:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:32:34.745 17:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:32:34.745 17:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:32:34.745 17:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:32:34.745 17:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:32:34.745 17:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:32:34.745 17:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:32:34.745 17:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:32:34.745 17:29:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:32:34.745 17:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:32:34.745 17:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:32:34.745 17:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:32:34.745 17:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:32:34.745 17:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:32:34.745 17:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:32:34.745 17:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:32:34.745 17:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61443 00:32:34.745 17:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:32:34.745 17:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61443 00:32:34.745 17:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61443 ']' 00:32:34.745 17:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:34.745 17:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:34.745 17:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:34.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:34.745 17:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:34.745 17:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:34.745 [2024-11-26 17:29:35.282263] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:32:34.745 [2024-11-26 17:29:35.282472] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61443 ] 00:32:35.004 [2024-11-26 17:29:35.460173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:35.004 [2024-11-26 17:29:35.588491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:35.267 [2024-11-26 17:29:35.806054] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:35.267 [2024-11-26 17:29:35.806231] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:35.526 17:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:35.526 17:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:32:35.527 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:32:35.527 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:35.527 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:32:35.527 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:32:35.527 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:32:35.527 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:35.527 17:29:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:32:35.527 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:35.527 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:32:35.527 17:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.527 17:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:35.527 malloc1 00:32:35.527 17:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.527 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:35.527 17:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.527 17:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:35.527 [2024-11-26 17:29:36.208748] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:35.527 [2024-11-26 17:29:36.208846] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:35.527 [2024-11-26 17:29:36.208900] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:32:35.527 [2024-11-26 17:29:36.208912] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:35.527 [2024-11-26 17:29:36.211296] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:35.527 [2024-11-26 17:29:36.211345] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:35.527 pt1 00:32:35.527 17:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.527 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:32:35.527 17:29:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:35.527 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:32:35.527 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:32:35.527 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:32:35.527 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:35.527 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:32:35.527 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:35.527 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:32:35.527 17:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.527 17:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:35.787 malloc2 00:32:35.787 17:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.787 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:35.787 17:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.787 17:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:35.787 [2024-11-26 17:29:36.265717] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:35.787 [2024-11-26 17:29:36.265848] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:35.787 [2024-11-26 17:29:36.265897] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:32:35.787 
[2024-11-26 17:29:36.265949] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:35.787 [2024-11-26 17:29:36.268257] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:35.787 [2024-11-26 17:29:36.268365] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:35.787 pt2 00:32:35.787 17:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.787 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:32:35.787 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:35.787 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:32:35.787 17:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.787 17:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:35.787 [2024-11-26 17:29:36.277780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:35.787 [2024-11-26 17:29:36.279891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:35.787 [2024-11-26 17:29:36.280169] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:32:35.787 [2024-11-26 17:29:36.280238] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:32:35.787 [2024-11-26 17:29:36.280626] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:32:35.787 [2024-11-26 17:29:36.280904] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:32:35.787 [2024-11-26 17:29:36.280986] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:32:35.787 [2024-11-26 17:29:36.281302] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:35.787 17:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.787 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:32:35.787 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:35.787 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:35.787 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:35.787 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:35.787 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:35.787 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:35.787 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:35.787 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:35.787 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:35.787 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:35.787 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:35.787 17:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.787 17:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:35.787 17:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.787 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:35.787 "name": "raid_bdev1", 00:32:35.787 "uuid": 
"664df89e-b06d-48e6-9a84-86c4e17bb5d9", 00:32:35.787 "strip_size_kb": 64, 00:32:35.787 "state": "online", 00:32:35.787 "raid_level": "raid0", 00:32:35.787 "superblock": true, 00:32:35.787 "num_base_bdevs": 2, 00:32:35.787 "num_base_bdevs_discovered": 2, 00:32:35.787 "num_base_bdevs_operational": 2, 00:32:35.787 "base_bdevs_list": [ 00:32:35.787 { 00:32:35.787 "name": "pt1", 00:32:35.787 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:35.787 "is_configured": true, 00:32:35.787 "data_offset": 2048, 00:32:35.787 "data_size": 63488 00:32:35.787 }, 00:32:35.787 { 00:32:35.787 "name": "pt2", 00:32:35.787 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:35.787 "is_configured": true, 00:32:35.787 "data_offset": 2048, 00:32:35.787 "data_size": 63488 00:32:35.787 } 00:32:35.787 ] 00:32:35.787 }' 00:32:35.787 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:35.787 17:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:36.356 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:32:36.356 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:32:36.356 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:36.356 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:36.356 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:32:36.356 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:36.356 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:36.356 17:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.356 17:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:36.356 
17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:36.356 [2024-11-26 17:29:36.769216] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:36.356 17:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.356 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:36.356 "name": "raid_bdev1", 00:32:36.356 "aliases": [ 00:32:36.356 "664df89e-b06d-48e6-9a84-86c4e17bb5d9" 00:32:36.356 ], 00:32:36.356 "product_name": "Raid Volume", 00:32:36.356 "block_size": 512, 00:32:36.356 "num_blocks": 126976, 00:32:36.356 "uuid": "664df89e-b06d-48e6-9a84-86c4e17bb5d9", 00:32:36.356 "assigned_rate_limits": { 00:32:36.356 "rw_ios_per_sec": 0, 00:32:36.356 "rw_mbytes_per_sec": 0, 00:32:36.356 "r_mbytes_per_sec": 0, 00:32:36.356 "w_mbytes_per_sec": 0 00:32:36.356 }, 00:32:36.356 "claimed": false, 00:32:36.356 "zoned": false, 00:32:36.356 "supported_io_types": { 00:32:36.356 "read": true, 00:32:36.356 "write": true, 00:32:36.356 "unmap": true, 00:32:36.356 "flush": true, 00:32:36.356 "reset": true, 00:32:36.356 "nvme_admin": false, 00:32:36.356 "nvme_io": false, 00:32:36.356 "nvme_io_md": false, 00:32:36.356 "write_zeroes": true, 00:32:36.356 "zcopy": false, 00:32:36.356 "get_zone_info": false, 00:32:36.356 "zone_management": false, 00:32:36.356 "zone_append": false, 00:32:36.356 "compare": false, 00:32:36.356 "compare_and_write": false, 00:32:36.356 "abort": false, 00:32:36.356 "seek_hole": false, 00:32:36.357 "seek_data": false, 00:32:36.357 "copy": false, 00:32:36.357 "nvme_iov_md": false 00:32:36.357 }, 00:32:36.357 "memory_domains": [ 00:32:36.357 { 00:32:36.357 "dma_device_id": "system", 00:32:36.357 "dma_device_type": 1 00:32:36.357 }, 00:32:36.357 { 00:32:36.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:36.357 "dma_device_type": 2 00:32:36.357 }, 00:32:36.357 { 00:32:36.357 "dma_device_id": "system", 00:32:36.357 
"dma_device_type": 1 00:32:36.357 }, 00:32:36.357 { 00:32:36.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:36.357 "dma_device_type": 2 00:32:36.357 } 00:32:36.357 ], 00:32:36.357 "driver_specific": { 00:32:36.357 "raid": { 00:32:36.357 "uuid": "664df89e-b06d-48e6-9a84-86c4e17bb5d9", 00:32:36.357 "strip_size_kb": 64, 00:32:36.357 "state": "online", 00:32:36.357 "raid_level": "raid0", 00:32:36.357 "superblock": true, 00:32:36.357 "num_base_bdevs": 2, 00:32:36.357 "num_base_bdevs_discovered": 2, 00:32:36.357 "num_base_bdevs_operational": 2, 00:32:36.357 "base_bdevs_list": [ 00:32:36.357 { 00:32:36.357 "name": "pt1", 00:32:36.357 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:36.357 "is_configured": true, 00:32:36.357 "data_offset": 2048, 00:32:36.357 "data_size": 63488 00:32:36.357 }, 00:32:36.357 { 00:32:36.357 "name": "pt2", 00:32:36.357 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:36.357 "is_configured": true, 00:32:36.357 "data_offset": 2048, 00:32:36.357 "data_size": 63488 00:32:36.357 } 00:32:36.357 ] 00:32:36.357 } 00:32:36.357 } 00:32:36.357 }' 00:32:36.357 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:36.357 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:32:36.357 pt2' 00:32:36.357 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:36.357 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:32:36.357 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:36.357 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:36.357 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt1 00:32:36.357 17:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.357 17:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:36.357 17:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.357 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:36.357 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:36.357 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:36.357 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:32:36.357 17:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:36.357 17:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.357 17:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:36.357 17:29:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.357 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:36.357 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:36.357 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:36.357 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:32:36.357 17:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.357 17:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:36.357 [2024-11-26 17:29:37.012873] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:32:36.357 17:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.617 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=664df89e-b06d-48e6-9a84-86c4e17bb5d9 00:32:36.617 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 664df89e-b06d-48e6-9a84-86c4e17bb5d9 ']' 00:32:36.617 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:32:36.617 17:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.617 17:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:36.617 [2024-11-26 17:29:37.056393] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:36.617 [2024-11-26 17:29:37.056470] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:36.617 [2024-11-26 17:29:37.056624] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:36.617 [2024-11-26 17:29:37.056683] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:36.617 [2024-11-26 17:29:37.056699] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:32:36.617 17:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.617 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:36.617 17:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.617 17:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:36.617 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:32:36.617 17:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.617 
17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:32:36.617 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:32:36.617 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:32:36.617 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:32:36.617 17:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.617 17:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:36.617 17:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.617 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:32:36.617 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:32:36.617 17:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.617 17:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:36.617 17:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.617 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:32:36.617 17:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.617 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:32:36.617 17:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:36.617 17:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.617 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:32:36.617 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT 
rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:32:36.617 17:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:32:36.617 17:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:32:36.617 17:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:36.618 17:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:36.618 17:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:36.618 17:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:36.618 17:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:32:36.618 17:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.618 17:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:36.618 [2024-11-26 17:29:37.200211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:32:36.618 [2024-11-26 17:29:37.202171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:32:36.618 [2024-11-26 17:29:37.202245] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:32:36.618 [2024-11-26 17:29:37.202315] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:32:36.618 [2024-11-26 17:29:37.202333] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:36.618 [2024-11-26 17:29:37.202349] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:32:36.618 request: 00:32:36.618 { 00:32:36.618 "name": "raid_bdev1", 00:32:36.618 "raid_level": "raid0", 00:32:36.618 "base_bdevs": [ 00:32:36.618 "malloc1", 00:32:36.618 "malloc2" 00:32:36.618 ], 00:32:36.618 "strip_size_kb": 64, 00:32:36.618 "superblock": false, 00:32:36.618 "method": "bdev_raid_create", 00:32:36.618 "req_id": 1 00:32:36.618 } 00:32:36.618 Got JSON-RPC error response 00:32:36.618 response: 00:32:36.618 { 00:32:36.618 "code": -17, 00:32:36.618 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:32:36.618 } 00:32:36.618 17:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:36.618 17:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:32:36.618 17:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:36.618 17:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:36.618 17:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:36.618 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:36.618 17:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.618 17:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:36.618 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:32:36.618 17:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.618 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:32:36.618 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:32:36.618 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:32:36.618 17:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.618 17:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:36.618 [2024-11-26 17:29:37.260099] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:36.618 [2024-11-26 17:29:37.260267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:36.618 [2024-11-26 17:29:37.260328] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:32:36.618 [2024-11-26 17:29:37.260377] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:36.618 [2024-11-26 17:29:37.262849] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:36.618 [2024-11-26 17:29:37.262940] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:36.618 [2024-11-26 17:29:37.263073] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:32:36.618 [2024-11-26 17:29:37.263175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:36.618 pt1 00:32:36.618 17:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.618 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:32:36.618 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:36.618 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:36.618 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:36.618 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:36.618 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:32:36.618 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:36.618 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:36.618 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:36.618 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:36.618 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:36.618 17:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.618 17:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:36.618 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:36.618 17:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.878 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:36.878 "name": "raid_bdev1", 00:32:36.878 "uuid": "664df89e-b06d-48e6-9a84-86c4e17bb5d9", 00:32:36.878 "strip_size_kb": 64, 00:32:36.878 "state": "configuring", 00:32:36.878 "raid_level": "raid0", 00:32:36.878 "superblock": true, 00:32:36.878 "num_base_bdevs": 2, 00:32:36.878 "num_base_bdevs_discovered": 1, 00:32:36.878 "num_base_bdevs_operational": 2, 00:32:36.878 "base_bdevs_list": [ 00:32:36.878 { 00:32:36.878 "name": "pt1", 00:32:36.878 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:36.878 "is_configured": true, 00:32:36.878 "data_offset": 2048, 00:32:36.878 "data_size": 63488 00:32:36.878 }, 00:32:36.878 { 00:32:36.878 "name": null, 00:32:36.878 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:36.878 "is_configured": false, 00:32:36.878 "data_offset": 2048, 00:32:36.878 "data_size": 63488 00:32:36.878 } 00:32:36.878 ] 00:32:36.878 }' 00:32:36.878 17:29:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:36.878 17:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:37.138 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:32:37.138 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:32:37.138 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:32:37.138 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:37.138 17:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.138 17:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:37.138 [2024-11-26 17:29:37.639759] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:37.138 [2024-11-26 17:29:37.639859] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:37.138 [2024-11-26 17:29:37.639888] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:32:37.138 [2024-11-26 17:29:37.639906] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:37.138 [2024-11-26 17:29:37.640446] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:37.138 [2024-11-26 17:29:37.640474] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:37.138 [2024-11-26 17:29:37.640614] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:32:37.138 [2024-11-26 17:29:37.640656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:37.138 [2024-11-26 17:29:37.640813] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:32:37.138 [2024-11-26 17:29:37.640828] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:32:37.138 [2024-11-26 17:29:37.641123] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:32:37.138 [2024-11-26 17:29:37.641386] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:32:37.138 [2024-11-26 17:29:37.641403] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:32:37.138 [2024-11-26 17:29:37.641615] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:37.138 pt2 00:32:37.138 17:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.138 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:32:37.138 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:32:37.138 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:32:37.138 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:37.138 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:37.138 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:37.138 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:37.138 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:37.138 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:37.138 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:37.138 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:37.138 17:29:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:32:37.138 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:37.138 17:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.138 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:37.138 17:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:37.138 17:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.138 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:37.138 "name": "raid_bdev1", 00:32:37.138 "uuid": "664df89e-b06d-48e6-9a84-86c4e17bb5d9", 00:32:37.138 "strip_size_kb": 64, 00:32:37.138 "state": "online", 00:32:37.138 "raid_level": "raid0", 00:32:37.138 "superblock": true, 00:32:37.138 "num_base_bdevs": 2, 00:32:37.138 "num_base_bdevs_discovered": 2, 00:32:37.138 "num_base_bdevs_operational": 2, 00:32:37.138 "base_bdevs_list": [ 00:32:37.138 { 00:32:37.138 "name": "pt1", 00:32:37.138 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:37.138 "is_configured": true, 00:32:37.138 "data_offset": 2048, 00:32:37.138 "data_size": 63488 00:32:37.138 }, 00:32:37.138 { 00:32:37.138 "name": "pt2", 00:32:37.138 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:37.138 "is_configured": true, 00:32:37.138 "data_offset": 2048, 00:32:37.138 "data_size": 63488 00:32:37.138 } 00:32:37.138 ] 00:32:37.138 }' 00:32:37.138 17:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:37.138 17:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:37.707 17:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:32:37.708 17:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:32:37.708 
17:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:37.708 17:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:37.708 17:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:32:37.708 17:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:37.708 17:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:37.708 17:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:37.708 17:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.708 17:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:37.708 [2024-11-26 17:29:38.127997] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:37.708 17:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.708 17:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:37.708 "name": "raid_bdev1", 00:32:37.708 "aliases": [ 00:32:37.708 "664df89e-b06d-48e6-9a84-86c4e17bb5d9" 00:32:37.708 ], 00:32:37.708 "product_name": "Raid Volume", 00:32:37.708 "block_size": 512, 00:32:37.708 "num_blocks": 126976, 00:32:37.708 "uuid": "664df89e-b06d-48e6-9a84-86c4e17bb5d9", 00:32:37.708 "assigned_rate_limits": { 00:32:37.708 "rw_ios_per_sec": 0, 00:32:37.708 "rw_mbytes_per_sec": 0, 00:32:37.708 "r_mbytes_per_sec": 0, 00:32:37.708 "w_mbytes_per_sec": 0 00:32:37.708 }, 00:32:37.708 "claimed": false, 00:32:37.708 "zoned": false, 00:32:37.708 "supported_io_types": { 00:32:37.708 "read": true, 00:32:37.708 "write": true, 00:32:37.708 "unmap": true, 00:32:37.708 "flush": true, 00:32:37.708 "reset": true, 00:32:37.708 "nvme_admin": false, 00:32:37.708 "nvme_io": false, 00:32:37.708 "nvme_io_md": false, 00:32:37.708 
"write_zeroes": true, 00:32:37.708 "zcopy": false, 00:32:37.708 "get_zone_info": false, 00:32:37.708 "zone_management": false, 00:32:37.708 "zone_append": false, 00:32:37.708 "compare": false, 00:32:37.708 "compare_and_write": false, 00:32:37.708 "abort": false, 00:32:37.708 "seek_hole": false, 00:32:37.708 "seek_data": false, 00:32:37.708 "copy": false, 00:32:37.708 "nvme_iov_md": false 00:32:37.708 }, 00:32:37.708 "memory_domains": [ 00:32:37.708 { 00:32:37.708 "dma_device_id": "system", 00:32:37.708 "dma_device_type": 1 00:32:37.708 }, 00:32:37.708 { 00:32:37.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:37.708 "dma_device_type": 2 00:32:37.708 }, 00:32:37.708 { 00:32:37.708 "dma_device_id": "system", 00:32:37.708 "dma_device_type": 1 00:32:37.708 }, 00:32:37.708 { 00:32:37.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:37.708 "dma_device_type": 2 00:32:37.708 } 00:32:37.708 ], 00:32:37.708 "driver_specific": { 00:32:37.708 "raid": { 00:32:37.708 "uuid": "664df89e-b06d-48e6-9a84-86c4e17bb5d9", 00:32:37.708 "strip_size_kb": 64, 00:32:37.708 "state": "online", 00:32:37.708 "raid_level": "raid0", 00:32:37.708 "superblock": true, 00:32:37.708 "num_base_bdevs": 2, 00:32:37.708 "num_base_bdevs_discovered": 2, 00:32:37.708 "num_base_bdevs_operational": 2, 00:32:37.708 "base_bdevs_list": [ 00:32:37.708 { 00:32:37.708 "name": "pt1", 00:32:37.708 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:37.708 "is_configured": true, 00:32:37.708 "data_offset": 2048, 00:32:37.708 "data_size": 63488 00:32:37.708 }, 00:32:37.708 { 00:32:37.708 "name": "pt2", 00:32:37.708 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:37.708 "is_configured": true, 00:32:37.708 "data_offset": 2048, 00:32:37.708 "data_size": 63488 00:32:37.708 } 00:32:37.708 ] 00:32:37.708 } 00:32:37.708 } 00:32:37.708 }' 00:32:37.708 17:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:32:37.708 17:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:32:37.708 pt2' 00:32:37.708 17:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:37.708 17:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:32:37.708 17:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:37.708 17:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:32:37.708 17:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:37.708 17:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.708 17:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:37.708 17:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.708 17:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:37.708 17:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:37.708 17:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:37.708 17:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:32:37.708 17:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:37.708 17:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.708 17:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:37.708 17:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.708 17:29:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:37.708 17:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:37.708 17:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:32:37.708 17:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:37.708 17:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.708 17:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:37.708 [2024-11-26 17:29:38.399982] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:37.968 17:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.968 17:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 664df89e-b06d-48e6-9a84-86c4e17bb5d9 '!=' 664df89e-b06d-48e6-9a84-86c4e17bb5d9 ']' 00:32:37.968 17:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:32:37.968 17:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:32:37.968 17:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:32:37.968 17:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61443 00:32:37.968 17:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61443 ']' 00:32:37.968 17:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61443 00:32:37.968 17:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:32:37.968 17:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:37.968 17:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61443 00:32:37.968 killing process with pid 61443 
00:32:37.968 17:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:37.968 17:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:37.968 17:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61443' 00:32:37.968 17:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61443 00:32:37.968 [2024-11-26 17:29:38.488431] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:37.968 [2024-11-26 17:29:38.488561] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:37.968 17:29:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61443 00:32:37.968 [2024-11-26 17:29:38.488625] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:37.968 [2024-11-26 17:29:38.488641] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:32:38.237 [2024-11-26 17:29:38.708167] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:39.633 17:29:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:32:39.633 00:32:39.633 real 0m4.715s 00:32:39.633 user 0m6.599s 00:32:39.633 sys 0m0.806s 00:32:39.633 ************************************ 00:32:39.633 END TEST raid_superblock_test 00:32:39.633 ************************************ 00:32:39.633 17:29:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:39.633 17:29:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:39.633 17:29:39 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:32:39.633 17:29:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:32:39.633 17:29:39 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:32:39.633 17:29:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:39.633 ************************************ 00:32:39.633 START TEST raid_read_error_test 00:32:39.633 ************************************ 00:32:39.633 17:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:32:39.633 17:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:32:39.633 17:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:32:39.633 17:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:32:39.633 17:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:32:39.633 17:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:39.633 17:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:32:39.633 17:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:32:39.633 17:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:39.633 17:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:32:39.633 17:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:32:39.633 17:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:39.633 17:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:32:39.633 17:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:32:39.633 17:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:32:39.633 17:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:32:39.633 17:29:39 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:32:39.633 17:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:32:39.633 17:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:32:39.633 17:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:32:39.633 17:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:32:39.633 17:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:32:39.633 17:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:32:39.633 17:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.2qefatrbAe 00:32:39.633 17:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61655 00:32:39.633 17:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:32:39.633 17:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61655 00:32:39.633 17:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61655 ']' 00:32:39.633 17:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:39.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:39.633 17:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:39.633 17:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:39.633 17:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:39.634 17:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:39.634 [2024-11-26 17:29:40.073119] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:32:39.634 [2024-11-26 17:29:40.073252] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61655 ] 00:32:39.634 [2024-11-26 17:29:40.229226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:39.893 [2024-11-26 17:29:40.353535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:39.893 [2024-11-26 17:29:40.562266] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:39.893 [2024-11-26 17:29:40.562340] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:40.464 17:29:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:40.464 17:29:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:32:40.464 17:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:32:40.464 17:29:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:32:40.464 17:29:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.464 17:29:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:40.464 BaseBdev1_malloc 00:32:40.464 17:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.464 17:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:32:40.464 17:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.464 17:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:40.464 true 00:32:40.464 17:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.464 17:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:32:40.464 17:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.464 17:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:40.464 [2024-11-26 17:29:41.042929] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:32:40.464 [2024-11-26 17:29:41.042997] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:40.464 [2024-11-26 17:29:41.043038] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:32:40.464 [2024-11-26 17:29:41.043062] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:40.464 [2024-11-26 17:29:41.045410] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:40.464 [2024-11-26 17:29:41.045469] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:40.464 BaseBdev1 00:32:40.464 17:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.464 17:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:32:40.464 17:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:32:40.464 17:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.464 17:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:32:40.464 BaseBdev2_malloc 00:32:40.464 17:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.464 17:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:32:40.464 17:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.464 17:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:40.464 true 00:32:40.465 17:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.465 17:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:32:40.465 17:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.465 17:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:40.465 [2024-11-26 17:29:41.112855] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:32:40.465 [2024-11-26 17:29:41.112923] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:40.465 [2024-11-26 17:29:41.112943] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:32:40.465 [2024-11-26 17:29:41.112956] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:40.465 [2024-11-26 17:29:41.115072] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:40.465 [2024-11-26 17:29:41.115121] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:32:40.465 BaseBdev2 00:32:40.465 17:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.465 17:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:32:40.465 17:29:41 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.465 17:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:40.465 [2024-11-26 17:29:41.124896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:40.465 [2024-11-26 17:29:41.126961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:40.465 [2024-11-26 17:29:41.127198] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:32:40.465 [2024-11-26 17:29:41.127221] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:32:40.465 [2024-11-26 17:29:41.127530] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:32:40.465 [2024-11-26 17:29:41.127765] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:32:40.465 [2024-11-26 17:29:41.127790] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:32:40.465 [2024-11-26 17:29:41.127986] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:40.465 17:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.465 17:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:32:40.465 17:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:40.465 17:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:40.465 17:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:40.465 17:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:40.465 17:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:32:40.465 17:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:40.465 17:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:40.465 17:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:40.465 17:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:40.465 17:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:40.465 17:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:40.465 17:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.465 17:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:40.465 17:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.723 17:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:40.723 "name": "raid_bdev1", 00:32:40.723 "uuid": "90385772-6488-4187-9bf0-2e4c1fa152ad", 00:32:40.723 "strip_size_kb": 64, 00:32:40.723 "state": "online", 00:32:40.723 "raid_level": "raid0", 00:32:40.723 "superblock": true, 00:32:40.723 "num_base_bdevs": 2, 00:32:40.723 "num_base_bdevs_discovered": 2, 00:32:40.723 "num_base_bdevs_operational": 2, 00:32:40.723 "base_bdevs_list": [ 00:32:40.723 { 00:32:40.723 "name": "BaseBdev1", 00:32:40.723 "uuid": "6963c9c3-a72a-5687-8507-70db98c86ede", 00:32:40.723 "is_configured": true, 00:32:40.723 "data_offset": 2048, 00:32:40.723 "data_size": 63488 00:32:40.723 }, 00:32:40.723 { 00:32:40.723 "name": "BaseBdev2", 00:32:40.723 "uuid": "5318a546-0bad-5ff4-8611-f9e51d470f8d", 00:32:40.723 "is_configured": true, 00:32:40.723 "data_offset": 2048, 00:32:40.723 "data_size": 63488 00:32:40.723 } 00:32:40.723 ] 00:32:40.723 }' 00:32:40.723 17:29:41 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:40.723 17:29:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:40.983 17:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:32:40.983 17:29:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:32:41.244 [2024-11-26 17:29:41.685328] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:32:42.185 17:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:32:42.185 17:29:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.185 17:29:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:42.185 17:29:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.185 17:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:32:42.185 17:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:32:42.185 17:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:32:42.185 17:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:32:42.185 17:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:42.185 17:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:42.185 17:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:42.185 17:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:42.185 17:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:32:42.185 17:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:42.185 17:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:42.185 17:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:42.185 17:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:42.185 17:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:42.185 17:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:42.185 17:29:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.185 17:29:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:42.185 17:29:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.185 17:29:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:42.185 "name": "raid_bdev1", 00:32:42.185 "uuid": "90385772-6488-4187-9bf0-2e4c1fa152ad", 00:32:42.185 "strip_size_kb": 64, 00:32:42.185 "state": "online", 00:32:42.185 "raid_level": "raid0", 00:32:42.185 "superblock": true, 00:32:42.185 "num_base_bdevs": 2, 00:32:42.186 "num_base_bdevs_discovered": 2, 00:32:42.186 "num_base_bdevs_operational": 2, 00:32:42.186 "base_bdevs_list": [ 00:32:42.186 { 00:32:42.186 "name": "BaseBdev1", 00:32:42.186 "uuid": "6963c9c3-a72a-5687-8507-70db98c86ede", 00:32:42.186 "is_configured": true, 00:32:42.186 "data_offset": 2048, 00:32:42.186 "data_size": 63488 00:32:42.186 }, 00:32:42.186 { 00:32:42.186 "name": "BaseBdev2", 00:32:42.186 "uuid": "5318a546-0bad-5ff4-8611-f9e51d470f8d", 00:32:42.186 "is_configured": true, 00:32:42.186 "data_offset": 2048, 00:32:42.186 "data_size": 63488 00:32:42.186 } 00:32:42.186 ] 00:32:42.186 }' 00:32:42.186 17:29:42 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:42.186 17:29:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:42.450 17:29:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:32:42.450 17:29:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.450 17:29:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:42.450 [2024-11-26 17:29:43.013965] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:42.450 [2024-11-26 17:29:43.014007] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:42.450 [2024-11-26 17:29:43.016998] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:42.450 [2024-11-26 17:29:43.017048] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:42.450 [2024-11-26 17:29:43.017083] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:42.450 [2024-11-26 17:29:43.017096] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:32:42.450 { 00:32:42.450 "results": [ 00:32:42.450 { 00:32:42.450 "job": "raid_bdev1", 00:32:42.450 "core_mask": "0x1", 00:32:42.450 "workload": "randrw", 00:32:42.450 "percentage": 50, 00:32:42.450 "status": "finished", 00:32:42.450 "queue_depth": 1, 00:32:42.450 "io_size": 131072, 00:32:42.450 "runtime": 1.329142, 00:32:42.450 "iops": 13852.545476706026, 00:32:42.450 "mibps": 1731.5681845882532, 00:32:42.450 "io_failed": 1, 00:32:42.450 "io_timeout": 0, 00:32:42.450 "avg_latency_us": 99.59381099882678, 00:32:42.450 "min_latency_us": 28.05938864628821, 00:32:42.450 "max_latency_us": 1452.380786026201 00:32:42.450 } 00:32:42.450 ], 00:32:42.450 "core_count": 1 00:32:42.450 } 00:32:42.450 17:29:43 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.450 17:29:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61655 00:32:42.450 17:29:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61655 ']' 00:32:42.450 17:29:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61655 00:32:42.450 17:29:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:32:42.450 17:29:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:42.450 17:29:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61655 00:32:42.450 killing process with pid 61655 00:32:42.450 17:29:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:42.450 17:29:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:42.450 17:29:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61655' 00:32:42.450 17:29:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61655 00:32:42.450 17:29:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61655 00:32:42.450 [2024-11-26 17:29:43.049625] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:42.709 [2024-11-26 17:29:43.193265] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:44.085 17:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.2qefatrbAe 00:32:44.085 17:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:32:44.085 17:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:32:44.085 17:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:32:44.085 17:29:44 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:32:44.085 17:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:32:44.085 17:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:32:44.085 ************************************ 00:32:44.085 END TEST raid_read_error_test 00:32:44.085 ************************************ 00:32:44.085 17:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:32:44.085 00:32:44.085 real 0m4.488s 00:32:44.085 user 0m5.406s 00:32:44.085 sys 0m0.543s 00:32:44.085 17:29:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:44.085 17:29:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:44.085 17:29:44 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:32:44.085 17:29:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:32:44.085 17:29:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:44.085 17:29:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:44.085 ************************************ 00:32:44.085 START TEST raid_write_error_test 00:32:44.085 ************************************ 00:32:44.085 17:29:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:32:44.085 17:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:32:44.085 17:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:32:44.085 17:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:32:44.085 17:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:32:44.085 17:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:44.085 17:29:44 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:32:44.085 17:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:32:44.085 17:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:44.085 17:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:32:44.085 17:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:32:44.085 17:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:44.085 17:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:32:44.085 17:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:32:44.085 17:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:32:44.085 17:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:32:44.085 17:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:32:44.085 17:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:32:44.085 17:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:32:44.085 17:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:32:44.085 17:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:32:44.085 17:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:32:44.085 17:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:32:44.085 17:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.KiSMhAnhhB 00:32:44.085 17:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61795 00:32:44.085 17:29:44 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:32:44.085 17:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61795 00:32:44.085 17:29:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61795 ']' 00:32:44.085 17:29:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:44.085 17:29:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:44.085 17:29:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:44.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:44.085 17:29:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:44.085 17:29:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:44.085 [2024-11-26 17:29:44.640810] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:32:44.085 [2024-11-26 17:29:44.641005] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61795 ] 00:32:44.345 [2024-11-26 17:29:44.822092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:44.345 [2024-11-26 17:29:44.944749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:44.606 [2024-11-26 17:29:45.158369] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:44.606 [2024-11-26 17:29:45.158572] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:44.866 17:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:44.867 17:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:32:44.867 17:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:32:44.867 17:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:32:44.867 17:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.867 17:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:45.127 BaseBdev1_malloc 00:32:45.127 17:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.127 17:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:32:45.127 17:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.127 17:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:45.127 true 00:32:45.127 17:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:32:45.127 17:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:32:45.127 17:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.127 17:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:45.127 [2024-11-26 17:29:45.603583] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:32:45.127 [2024-11-26 17:29:45.603698] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:45.127 [2024-11-26 17:29:45.603724] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:32:45.127 [2024-11-26 17:29:45.603738] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:45.127 [2024-11-26 17:29:45.605909] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:45.127 [2024-11-26 17:29:45.605957] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:45.127 BaseBdev1 00:32:45.127 17:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.127 17:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:32:45.127 17:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:32:45.127 17:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.127 17:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:45.127 BaseBdev2_malloc 00:32:45.127 17:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.127 17:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:32:45.127 17:29:45 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.127 17:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:45.127 true 00:32:45.127 17:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.127 17:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:32:45.127 17:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.127 17:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:45.127 [2024-11-26 17:29:45.670424] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:32:45.127 [2024-11-26 17:29:45.670491] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:45.127 [2024-11-26 17:29:45.670531] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:32:45.127 [2024-11-26 17:29:45.670547] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:45.127 [2024-11-26 17:29:45.672893] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:45.127 [2024-11-26 17:29:45.673008] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:32:45.127 BaseBdev2 00:32:45.127 17:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.127 17:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:32:45.127 17:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.127 17:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:45.127 [2024-11-26 17:29:45.682451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:32:45.127 [2024-11-26 17:29:45.684505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:45.127 [2024-11-26 17:29:45.684768] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:32:45.127 [2024-11-26 17:29:45.684798] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:32:45.127 [2024-11-26 17:29:45.685079] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:32:45.127 [2024-11-26 17:29:45.685278] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:32:45.127 [2024-11-26 17:29:45.685293] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:32:45.127 [2024-11-26 17:29:45.685488] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:45.127 17:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.127 17:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:32:45.127 17:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:45.127 17:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:45.127 17:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:45.127 17:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:45.127 17:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:45.127 17:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:45.127 17:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:45.127 17:29:45 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:45.127 17:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:45.127 17:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:45.127 17:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:45.127 17:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.127 17:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:45.127 17:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.127 17:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:45.127 "name": "raid_bdev1", 00:32:45.127 "uuid": "aa56d985-2cd4-4465-afb2-c87cfe959073", 00:32:45.127 "strip_size_kb": 64, 00:32:45.127 "state": "online", 00:32:45.127 "raid_level": "raid0", 00:32:45.127 "superblock": true, 00:32:45.127 "num_base_bdevs": 2, 00:32:45.127 "num_base_bdevs_discovered": 2, 00:32:45.127 "num_base_bdevs_operational": 2, 00:32:45.127 "base_bdevs_list": [ 00:32:45.127 { 00:32:45.127 "name": "BaseBdev1", 00:32:45.127 "uuid": "bb729928-ac8c-5f6a-b472-47e2afbc717b", 00:32:45.127 "is_configured": true, 00:32:45.127 "data_offset": 2048, 00:32:45.127 "data_size": 63488 00:32:45.127 }, 00:32:45.127 { 00:32:45.127 "name": "BaseBdev2", 00:32:45.127 "uuid": "a5a0d8f8-f155-54ab-b1c0-1ca90fe9989d", 00:32:45.127 "is_configured": true, 00:32:45.127 "data_offset": 2048, 00:32:45.127 "data_size": 63488 00:32:45.127 } 00:32:45.127 ] 00:32:45.127 }' 00:32:45.128 17:29:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:45.128 17:29:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:45.697 17:29:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:32:45.697 17:29:46 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:32:45.697 [2024-11-26 17:29:46.211025] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:32:46.636 17:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:32:46.636 17:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.636 17:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:46.636 17:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.637 17:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:32:46.637 17:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:32:46.637 17:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:32:46.637 17:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:32:46.637 17:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:46.637 17:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:46.637 17:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:46.637 17:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:46.637 17:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:46.637 17:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:46.637 17:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:46.637 17:29:47 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:46.637 17:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:46.637 17:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:46.637 17:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:46.637 17:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.637 17:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:46.637 17:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.637 17:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:46.637 "name": "raid_bdev1", 00:32:46.637 "uuid": "aa56d985-2cd4-4465-afb2-c87cfe959073", 00:32:46.637 "strip_size_kb": 64, 00:32:46.637 "state": "online", 00:32:46.637 "raid_level": "raid0", 00:32:46.637 "superblock": true, 00:32:46.637 "num_base_bdevs": 2, 00:32:46.637 "num_base_bdevs_discovered": 2, 00:32:46.637 "num_base_bdevs_operational": 2, 00:32:46.637 "base_bdevs_list": [ 00:32:46.637 { 00:32:46.637 "name": "BaseBdev1", 00:32:46.637 "uuid": "bb729928-ac8c-5f6a-b472-47e2afbc717b", 00:32:46.637 "is_configured": true, 00:32:46.637 "data_offset": 2048, 00:32:46.637 "data_size": 63488 00:32:46.637 }, 00:32:46.637 { 00:32:46.637 "name": "BaseBdev2", 00:32:46.637 "uuid": "a5a0d8f8-f155-54ab-b1c0-1ca90fe9989d", 00:32:46.637 "is_configured": true, 00:32:46.637 "data_offset": 2048, 00:32:46.637 "data_size": 63488 00:32:46.637 } 00:32:46.637 ] 00:32:46.637 }' 00:32:46.637 17:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:46.637 17:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:47.204 17:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:32:47.204 17:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.204 17:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:47.204 [2024-11-26 17:29:47.607576] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:47.204 [2024-11-26 17:29:47.607698] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:47.204 [2024-11-26 17:29:47.611025] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:47.204 [2024-11-26 17:29:47.611073] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:47.204 [2024-11-26 17:29:47.611114] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:47.204 [2024-11-26 17:29:47.611129] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:32:47.204 { 00:32:47.204 "results": [ 00:32:47.204 { 00:32:47.204 "job": "raid_bdev1", 00:32:47.204 "core_mask": "0x1", 00:32:47.204 "workload": "randrw", 00:32:47.204 "percentage": 50, 00:32:47.204 "status": "finished", 00:32:47.204 "queue_depth": 1, 00:32:47.204 "io_size": 131072, 00:32:47.204 "runtime": 1.397378, 00:32:47.204 "iops": 14331.125865728529, 00:32:47.204 "mibps": 1791.390733216066, 00:32:47.204 "io_failed": 1, 00:32:47.204 "io_timeout": 0, 00:32:47.204 "avg_latency_us": 96.44282018401796, 00:32:47.204 "min_latency_us": 28.28296943231441, 00:32:47.204 "max_latency_us": 1638.4 00:32:47.204 } 00:32:47.204 ], 00:32:47.204 "core_count": 1 00:32:47.204 } 00:32:47.204 17:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.204 17:29:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61795 00:32:47.204 17:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- 
# '[' -z 61795 ']' 00:32:47.204 17:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61795 00:32:47.204 17:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:32:47.204 17:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:47.204 17:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61795 00:32:47.204 17:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:47.204 17:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:47.204 17:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61795' 00:32:47.204 killing process with pid 61795 00:32:47.204 17:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61795 00:32:47.204 [2024-11-26 17:29:47.654781] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:47.204 17:29:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61795 00:32:47.204 [2024-11-26 17:29:47.795893] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:48.584 17:29:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:32:48.584 17:29:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.KiSMhAnhhB 00:32:48.584 17:29:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:32:48.584 17:29:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:32:48.584 17:29:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:32:48.584 17:29:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:32:48.584 17:29:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 
00:32:48.584 17:29:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:32:48.584 00:32:48.584 real 0m4.538s 00:32:48.584 user 0m5.485s 00:32:48.584 sys 0m0.543s 00:32:48.584 17:29:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:48.584 17:29:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.584 ************************************ 00:32:48.584 END TEST raid_write_error_test 00:32:48.584 ************************************ 00:32:48.584 17:29:49 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:32:48.584 17:29:49 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:32:48.584 17:29:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:32:48.584 17:29:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:48.584 17:29:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:48.584 ************************************ 00:32:48.584 START TEST raid_state_function_test 00:32:48.584 ************************************ 00:32:48.584 17:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:32:48.584 17:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:32:48.584 17:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:32:48.584 17:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:32:48.584 17:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:32:48.584 17:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:32:48.584 17:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:48.584 17:29:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:32:48.584 17:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:48.584 17:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:48.584 17:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:32:48.584 17:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:48.584 17:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:48.584 17:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:32:48.584 17:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:32:48.584 17:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:32:48.584 17:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:32:48.584 17:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:32:48.584 17:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:32:48.584 17:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:32:48.584 17:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:32:48.584 17:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:32:48.584 17:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:32:48.584 17:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:32:48.584 17:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61944 00:32:48.584 17:29:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:32:48.584 17:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61944' 00:32:48.584 Process raid pid: 61944 00:32:48.584 17:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61944 00:32:48.584 17:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61944 ']' 00:32:48.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:48.584 17:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:48.584 17:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:48.584 17:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:48.584 17:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:48.584 17:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.584 [2024-11-26 17:29:49.241205] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:32:48.585 [2024-11-26 17:29:49.241432] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:48.843 [2024-11-26 17:29:49.421848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:49.102 [2024-11-26 17:29:49.544208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:49.102 [2024-11-26 17:29:49.750254] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:49.102 [2024-11-26 17:29:49.750304] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:49.672 17:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:49.672 17:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:32:49.672 17:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:32:49.672 17:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.672 17:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:49.672 [2024-11-26 17:29:50.107076] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:49.672 [2024-11-26 17:29:50.107137] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:49.672 [2024-11-26 17:29:50.107149] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:49.672 [2024-11-26 17:29:50.107160] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:49.672 17:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.672 17:29:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:32:49.672 17:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:49.672 17:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:49.672 17:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:49.672 17:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:49.672 17:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:49.672 17:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:49.672 17:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:49.672 17:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:49.672 17:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:49.672 17:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:49.672 17:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:49.672 17:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.672 17:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:49.672 17:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.672 17:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:49.672 "name": "Existed_Raid", 00:32:49.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:49.672 "strip_size_kb": 64, 00:32:49.672 "state": "configuring", 00:32:49.672 
"raid_level": "concat", 00:32:49.672 "superblock": false, 00:32:49.672 "num_base_bdevs": 2, 00:32:49.672 "num_base_bdevs_discovered": 0, 00:32:49.672 "num_base_bdevs_operational": 2, 00:32:49.672 "base_bdevs_list": [ 00:32:49.672 { 00:32:49.672 "name": "BaseBdev1", 00:32:49.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:49.672 "is_configured": false, 00:32:49.672 "data_offset": 0, 00:32:49.672 "data_size": 0 00:32:49.672 }, 00:32:49.672 { 00:32:49.672 "name": "BaseBdev2", 00:32:49.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:49.672 "is_configured": false, 00:32:49.672 "data_offset": 0, 00:32:49.672 "data_size": 0 00:32:49.672 } 00:32:49.672 ] 00:32:49.672 }' 00:32:49.672 17:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:49.672 17:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:49.931 17:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:49.931 17:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.931 17:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:49.932 [2024-11-26 17:29:50.534310] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:49.932 [2024-11-26 17:29:50.534355] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:32:49.932 17:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.932 17:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:32:49.932 17:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.932 17:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:32:49.932 [2024-11-26 17:29:50.542257] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:49.932 [2024-11-26 17:29:50.542304] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:49.932 [2024-11-26 17:29:50.542314] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:49.932 [2024-11-26 17:29:50.542343] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:49.932 17:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.932 17:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:32:49.932 17:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.932 17:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:49.932 [2024-11-26 17:29:50.590091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:49.932 BaseBdev1 00:32:49.932 17:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.932 17:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:32:49.932 17:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:32:49.932 17:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:49.932 17:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:32:49.932 17:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:49.932 17:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:49.932 17:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:32:49.932 17:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.932 17:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:49.932 17:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.932 17:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:49.932 17:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.932 17:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:49.932 [ 00:32:49.932 { 00:32:49.932 "name": "BaseBdev1", 00:32:49.932 "aliases": [ 00:32:49.932 "299dcc26-83b9-46cf-8bcd-4e98d25fce95" 00:32:49.932 ], 00:32:49.932 "product_name": "Malloc disk", 00:32:49.932 "block_size": 512, 00:32:49.932 "num_blocks": 65536, 00:32:49.932 "uuid": "299dcc26-83b9-46cf-8bcd-4e98d25fce95", 00:32:49.932 "assigned_rate_limits": { 00:32:49.932 "rw_ios_per_sec": 0, 00:32:49.932 "rw_mbytes_per_sec": 0, 00:32:49.932 "r_mbytes_per_sec": 0, 00:32:49.932 "w_mbytes_per_sec": 0 00:32:49.932 }, 00:32:49.932 "claimed": true, 00:32:49.932 "claim_type": "exclusive_write", 00:32:49.932 "zoned": false, 00:32:49.932 "supported_io_types": { 00:32:49.932 "read": true, 00:32:49.932 "write": true, 00:32:49.932 "unmap": true, 00:32:49.932 "flush": true, 00:32:49.932 "reset": true, 00:32:49.932 "nvme_admin": false, 00:32:49.932 "nvme_io": false, 00:32:49.932 "nvme_io_md": false, 00:32:49.932 "write_zeroes": true, 00:32:49.932 "zcopy": true, 00:32:49.932 "get_zone_info": false, 00:32:49.932 "zone_management": false, 00:32:49.932 "zone_append": false, 00:32:49.932 "compare": false, 00:32:50.199 "compare_and_write": false, 00:32:50.199 "abort": true, 00:32:50.199 "seek_hole": false, 00:32:50.199 "seek_data": false, 00:32:50.199 "copy": true, 00:32:50.199 "nvme_iov_md": 
false 00:32:50.199 }, 00:32:50.199 "memory_domains": [ 00:32:50.199 { 00:32:50.199 "dma_device_id": "system", 00:32:50.199 "dma_device_type": 1 00:32:50.199 }, 00:32:50.200 { 00:32:50.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:50.200 "dma_device_type": 2 00:32:50.200 } 00:32:50.200 ], 00:32:50.200 "driver_specific": {} 00:32:50.200 } 00:32:50.200 ] 00:32:50.200 17:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.200 17:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:32:50.200 17:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:32:50.200 17:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:50.200 17:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:50.200 17:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:50.200 17:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:50.200 17:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:50.200 17:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:50.200 17:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:50.200 17:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:50.200 17:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:50.200 17:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:50.200 17:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:50.200 
17:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.200 17:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:50.200 17:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.200 17:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:50.200 "name": "Existed_Raid", 00:32:50.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:50.200 "strip_size_kb": 64, 00:32:50.200 "state": "configuring", 00:32:50.200 "raid_level": "concat", 00:32:50.200 "superblock": false, 00:32:50.200 "num_base_bdevs": 2, 00:32:50.200 "num_base_bdevs_discovered": 1, 00:32:50.200 "num_base_bdevs_operational": 2, 00:32:50.200 "base_bdevs_list": [ 00:32:50.200 { 00:32:50.200 "name": "BaseBdev1", 00:32:50.200 "uuid": "299dcc26-83b9-46cf-8bcd-4e98d25fce95", 00:32:50.200 "is_configured": true, 00:32:50.200 "data_offset": 0, 00:32:50.200 "data_size": 65536 00:32:50.200 }, 00:32:50.200 { 00:32:50.200 "name": "BaseBdev2", 00:32:50.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:50.200 "is_configured": false, 00:32:50.200 "data_offset": 0, 00:32:50.200 "data_size": 0 00:32:50.200 } 00:32:50.200 ] 00:32:50.200 }' 00:32:50.200 17:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:50.200 17:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:50.459 17:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:50.459 17:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.459 17:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:50.459 [2024-11-26 17:29:51.105341] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:50.459 [2024-11-26 17:29:51.105406] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:32:50.459 17:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.459 17:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:32:50.459 17:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.459 17:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:50.459 [2024-11-26 17:29:51.117434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:50.459 [2024-11-26 17:29:51.119561] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:50.459 [2024-11-26 17:29:51.119689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:50.459 17:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.459 17:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:32:50.459 17:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:50.459 17:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:32:50.459 17:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:50.459 17:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:50.459 17:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:50.459 17:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:50.459 17:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:32:50.459 17:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:50.459 17:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:50.459 17:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:50.459 17:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:50.459 17:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:50.459 17:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.459 17:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:50.459 17:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:50.459 17:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.718 17:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:50.719 "name": "Existed_Raid", 00:32:50.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:50.719 "strip_size_kb": 64, 00:32:50.719 "state": "configuring", 00:32:50.719 "raid_level": "concat", 00:32:50.719 "superblock": false, 00:32:50.719 "num_base_bdevs": 2, 00:32:50.719 "num_base_bdevs_discovered": 1, 00:32:50.719 "num_base_bdevs_operational": 2, 00:32:50.719 "base_bdevs_list": [ 00:32:50.719 { 00:32:50.719 "name": "BaseBdev1", 00:32:50.719 "uuid": "299dcc26-83b9-46cf-8bcd-4e98d25fce95", 00:32:50.719 "is_configured": true, 00:32:50.719 "data_offset": 0, 00:32:50.719 "data_size": 65536 00:32:50.719 }, 00:32:50.719 { 00:32:50.719 "name": "BaseBdev2", 00:32:50.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:50.719 "is_configured": false, 00:32:50.719 "data_offset": 0, 00:32:50.719 "data_size": 0 00:32:50.719 } 
00:32:50.719 ] 00:32:50.719 }' 00:32:50.719 17:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:50.719 17:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:50.978 17:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:32:50.978 17:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.978 17:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:50.978 [2024-11-26 17:29:51.573046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:50.978 [2024-11-26 17:29:51.573194] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:32:50.978 [2024-11-26 17:29:51.573225] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:32:50.978 [2024-11-26 17:29:51.573575] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:32:50.978 [2024-11-26 17:29:51.573807] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:32:50.978 [2024-11-26 17:29:51.573861] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:32:50.978 [2024-11-26 17:29:51.574184] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:50.978 BaseBdev2 00:32:50.978 17:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.978 17:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:32:50.978 17:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:32:50.978 17:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:50.978 17:29:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:32:50.978 17:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:50.978 17:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:50.978 17:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:50.978 17:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.978 17:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:50.978 17:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.978 17:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:50.978 17:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.978 17:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:50.978 [ 00:32:50.978 { 00:32:50.978 "name": "BaseBdev2", 00:32:50.978 "aliases": [ 00:32:50.978 "eb854ffd-cd25-47b4-8055-a1d5ac282caf" 00:32:50.978 ], 00:32:50.978 "product_name": "Malloc disk", 00:32:50.978 "block_size": 512, 00:32:50.978 "num_blocks": 65536, 00:32:50.978 "uuid": "eb854ffd-cd25-47b4-8055-a1d5ac282caf", 00:32:50.978 "assigned_rate_limits": { 00:32:50.978 "rw_ios_per_sec": 0, 00:32:50.978 "rw_mbytes_per_sec": 0, 00:32:50.978 "r_mbytes_per_sec": 0, 00:32:50.978 "w_mbytes_per_sec": 0 00:32:50.978 }, 00:32:50.978 "claimed": true, 00:32:50.978 "claim_type": "exclusive_write", 00:32:50.978 "zoned": false, 00:32:50.978 "supported_io_types": { 00:32:50.978 "read": true, 00:32:50.978 "write": true, 00:32:50.978 "unmap": true, 00:32:50.978 "flush": true, 00:32:50.978 "reset": true, 00:32:50.978 "nvme_admin": false, 00:32:50.978 "nvme_io": false, 00:32:50.978 "nvme_io_md": 
false, 00:32:50.978 "write_zeroes": true, 00:32:50.978 "zcopy": true, 00:32:50.978 "get_zone_info": false, 00:32:50.978 "zone_management": false, 00:32:50.978 "zone_append": false, 00:32:50.978 "compare": false, 00:32:50.978 "compare_and_write": false, 00:32:50.978 "abort": true, 00:32:50.978 "seek_hole": false, 00:32:50.978 "seek_data": false, 00:32:50.978 "copy": true, 00:32:50.978 "nvme_iov_md": false 00:32:50.978 }, 00:32:50.978 "memory_domains": [ 00:32:50.978 { 00:32:50.978 "dma_device_id": "system", 00:32:50.978 "dma_device_type": 1 00:32:50.978 }, 00:32:50.978 { 00:32:50.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:50.978 "dma_device_type": 2 00:32:50.978 } 00:32:50.978 ], 00:32:50.978 "driver_specific": {} 00:32:50.978 } 00:32:50.978 ] 00:32:50.978 17:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.978 17:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:32:50.978 17:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:32:50.978 17:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:50.978 17:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:32:50.978 17:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:50.978 17:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:50.978 17:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:50.978 17:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:50.978 17:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:50.978 17:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:32:50.978 17:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:50.978 17:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:50.978 17:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:50.978 17:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:50.978 17:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:50.978 17:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.978 17:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:50.978 17:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.978 17:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:50.978 "name": "Existed_Raid", 00:32:50.978 "uuid": "7988f6aa-043b-46cc-891b-22faba39d296", 00:32:50.978 "strip_size_kb": 64, 00:32:50.978 "state": "online", 00:32:50.978 "raid_level": "concat", 00:32:50.978 "superblock": false, 00:32:50.978 "num_base_bdevs": 2, 00:32:50.978 "num_base_bdevs_discovered": 2, 00:32:50.978 "num_base_bdevs_operational": 2, 00:32:50.978 "base_bdevs_list": [ 00:32:50.978 { 00:32:50.978 "name": "BaseBdev1", 00:32:50.978 "uuid": "299dcc26-83b9-46cf-8bcd-4e98d25fce95", 00:32:50.978 "is_configured": true, 00:32:50.978 "data_offset": 0, 00:32:50.978 "data_size": 65536 00:32:50.978 }, 00:32:50.978 { 00:32:50.978 "name": "BaseBdev2", 00:32:50.978 "uuid": "eb854ffd-cd25-47b4-8055-a1d5ac282caf", 00:32:50.978 "is_configured": true, 00:32:50.978 "data_offset": 0, 00:32:50.978 "data_size": 65536 00:32:50.978 } 00:32:50.978 ] 00:32:50.978 }' 00:32:50.978 17:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:32:50.978 17:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.548 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:32:51.548 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:32:51.548 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:51.548 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:51.548 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:32:51.548 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:51.548 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:32:51.548 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:51.548 17:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.548 17:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.548 [2024-11-26 17:29:52.048627] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:51.548 17:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.548 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:51.548 "name": "Existed_Raid", 00:32:51.548 "aliases": [ 00:32:51.548 "7988f6aa-043b-46cc-891b-22faba39d296" 00:32:51.548 ], 00:32:51.548 "product_name": "Raid Volume", 00:32:51.548 "block_size": 512, 00:32:51.548 "num_blocks": 131072, 00:32:51.548 "uuid": "7988f6aa-043b-46cc-891b-22faba39d296", 00:32:51.548 "assigned_rate_limits": { 00:32:51.548 "rw_ios_per_sec": 0, 00:32:51.548 "rw_mbytes_per_sec": 0, 00:32:51.548 "r_mbytes_per_sec": 
0, 00:32:51.548 "w_mbytes_per_sec": 0 00:32:51.548 }, 00:32:51.548 "claimed": false, 00:32:51.548 "zoned": false, 00:32:51.548 "supported_io_types": { 00:32:51.548 "read": true, 00:32:51.548 "write": true, 00:32:51.548 "unmap": true, 00:32:51.548 "flush": true, 00:32:51.548 "reset": true, 00:32:51.548 "nvme_admin": false, 00:32:51.548 "nvme_io": false, 00:32:51.548 "nvme_io_md": false, 00:32:51.548 "write_zeroes": true, 00:32:51.548 "zcopy": false, 00:32:51.548 "get_zone_info": false, 00:32:51.548 "zone_management": false, 00:32:51.548 "zone_append": false, 00:32:51.548 "compare": false, 00:32:51.548 "compare_and_write": false, 00:32:51.548 "abort": false, 00:32:51.548 "seek_hole": false, 00:32:51.548 "seek_data": false, 00:32:51.548 "copy": false, 00:32:51.548 "nvme_iov_md": false 00:32:51.548 }, 00:32:51.548 "memory_domains": [ 00:32:51.548 { 00:32:51.548 "dma_device_id": "system", 00:32:51.548 "dma_device_type": 1 00:32:51.548 }, 00:32:51.548 { 00:32:51.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:51.548 "dma_device_type": 2 00:32:51.548 }, 00:32:51.548 { 00:32:51.548 "dma_device_id": "system", 00:32:51.548 "dma_device_type": 1 00:32:51.548 }, 00:32:51.548 { 00:32:51.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:51.548 "dma_device_type": 2 00:32:51.548 } 00:32:51.548 ], 00:32:51.548 "driver_specific": { 00:32:51.548 "raid": { 00:32:51.548 "uuid": "7988f6aa-043b-46cc-891b-22faba39d296", 00:32:51.548 "strip_size_kb": 64, 00:32:51.548 "state": "online", 00:32:51.548 "raid_level": "concat", 00:32:51.548 "superblock": false, 00:32:51.548 "num_base_bdevs": 2, 00:32:51.548 "num_base_bdevs_discovered": 2, 00:32:51.548 "num_base_bdevs_operational": 2, 00:32:51.548 "base_bdevs_list": [ 00:32:51.548 { 00:32:51.548 "name": "BaseBdev1", 00:32:51.548 "uuid": "299dcc26-83b9-46cf-8bcd-4e98d25fce95", 00:32:51.548 "is_configured": true, 00:32:51.548 "data_offset": 0, 00:32:51.548 "data_size": 65536 00:32:51.548 }, 00:32:51.548 { 00:32:51.548 "name": "BaseBdev2", 
00:32:51.548 "uuid": "eb854ffd-cd25-47b4-8055-a1d5ac282caf", 00:32:51.548 "is_configured": true, 00:32:51.548 "data_offset": 0, 00:32:51.548 "data_size": 65536 00:32:51.548 } 00:32:51.548 ] 00:32:51.548 } 00:32:51.548 } 00:32:51.548 }' 00:32:51.548 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:51.548 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:32:51.548 BaseBdev2' 00:32:51.548 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:51.548 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:32:51.548 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:51.548 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:32:51.548 17:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.548 17:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.548 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:51.548 17:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.548 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:51.548 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:51.548 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:51.548 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:32:51.548 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:32:51.548 17:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.548 17:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.808 17:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.808 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:51.808 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:51.808 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:32:51.808 17:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.808 17:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.808 [2024-11-26 17:29:52.267966] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:51.808 [2024-11-26 17:29:52.268006] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:51.808 [2024-11-26 17:29:52.268062] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:51.808 17:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.808 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:32:51.808 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:32:51.808 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:32:51.808 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:32:51.808 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:32:51.808 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:32:51.808 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:51.808 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:32:51.808 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:51.808 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:51.808 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:51.808 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:51.808 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:51.808 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:51.808 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:51.808 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:51.808 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:51.808 17:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.808 17:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.808 17:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.808 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:51.808 "name": "Existed_Raid", 00:32:51.808 "uuid": "7988f6aa-043b-46cc-891b-22faba39d296", 00:32:51.808 "strip_size_kb": 64, 00:32:51.808 
"state": "offline", 00:32:51.808 "raid_level": "concat", 00:32:51.808 "superblock": false, 00:32:51.808 "num_base_bdevs": 2, 00:32:51.808 "num_base_bdevs_discovered": 1, 00:32:51.808 "num_base_bdevs_operational": 1, 00:32:51.808 "base_bdevs_list": [ 00:32:51.808 { 00:32:51.808 "name": null, 00:32:51.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:51.808 "is_configured": false, 00:32:51.808 "data_offset": 0, 00:32:51.808 "data_size": 65536 00:32:51.808 }, 00:32:51.808 { 00:32:51.808 "name": "BaseBdev2", 00:32:51.808 "uuid": "eb854ffd-cd25-47b4-8055-a1d5ac282caf", 00:32:51.808 "is_configured": true, 00:32:51.808 "data_offset": 0, 00:32:51.808 "data_size": 65536 00:32:51.808 } 00:32:51.808 ] 00:32:51.808 }' 00:32:51.808 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:51.808 17:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:52.376 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:32:52.376 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:52.376 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:52.376 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:32:52.376 17:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.376 17:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:52.376 17:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.376 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:32:52.376 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:52.376 17:29:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:32:52.376 17:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.376 17:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:52.376 [2024-11-26 17:29:52.865802] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:52.376 [2024-11-26 17:29:52.865859] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:32:52.376 17:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.376 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:32:52.376 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:52.376 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:52.376 17:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:32:52.376 17:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.376 17:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:52.376 17:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.376 17:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:32:52.376 17:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:32:52.376 17:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:32:52.376 17:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61944 00:32:52.376 17:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61944 ']' 00:32:52.376 17:29:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61944 00:32:52.376 17:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:32:52.376 17:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:52.376 17:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61944 00:32:52.376 killing process with pid 61944 00:32:52.376 17:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:52.376 17:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:52.376 17:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61944' 00:32:52.376 17:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61944 00:32:52.376 [2024-11-26 17:29:53.062903] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:52.376 17:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61944 00:32:52.636 [2024-11-26 17:29:53.080229] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:53.573 17:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:32:53.573 00:32:53.573 real 0m5.109s 00:32:53.573 user 0m7.357s 00:32:53.573 sys 0m0.828s 00:32:53.573 17:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:53.573 ************************************ 00:32:53.573 END TEST raid_state_function_test 00:32:53.573 ************************************ 00:32:53.573 17:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:53.831 17:29:54 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:32:53.831 17:29:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:32:53.831 17:29:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:53.831 17:29:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:53.831 ************************************ 00:32:53.831 START TEST raid_state_function_test_sb 00:32:53.831 ************************************ 00:32:53.831 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:32:53.832 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:32:53.832 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:32:53.832 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:32:53.832 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:32:53.832 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:32:53.832 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:53.832 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:32:53.832 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:53.832 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:53.832 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:32:53.832 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:53.832 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:53.832 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:32:53.832 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:32:53.832 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:32:53.832 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:32:53.832 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:32:53.832 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:32:53.832 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']'
00:32:53.832 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:32:53.832 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:32:53.832 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:32:53.832 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:32:53.832 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62196
00:32:53.832 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:32:53.832 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62196'
00:32:53.832 Process raid pid: 62196
00:32:53.832 17:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62196
00:32:53.832 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62196 ']'
00:32:53.832 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:53.832 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:53.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:53.832 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:53.832 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:53.832 17:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:53.832 [2024-11-26 17:29:54.402906] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization...
00:32:53.832 [2024-11-26 17:29:54.403123] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:32:54.092 [2024-11-26 17:29:54.579006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:54.092 [2024-11-26 17:29:54.703727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:32:54.351 [2024-11-26 17:29:54.919289] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:32:54.351 [2024-11-26 17:29:54.919328] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:32:54.614 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:32:54.614 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0
00:32:54.614 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:32:54.614 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:54.614 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:54.614 [2024-11-26 17:29:55.270676] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:32:54.614 [2024-11-26 17:29:55.270799] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:32:54.614 [2024-11-26 17:29:55.270842] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:32:54.614 [2024-11-26 17:29:55.270870] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:32:54.614 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:54.614 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:32:54.614 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:32:54.614 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:32:54.614 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:32:54.614 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:32:54.614 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:32:54.615 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:32:54.615 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:32:54.615 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:32:54.615 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:32:54.615 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:32:54.615 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:32:54.615 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:54.615 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:54.615 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:54.878 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:32:54.878 "name": "Existed_Raid",
00:32:54.878 "uuid": "a48535af-6ca3-4ff8-964a-d9840f57eb8c",
00:32:54.878 "strip_size_kb": 64,
00:32:54.878 "state": "configuring",
00:32:54.878 "raid_level": "concat",
00:32:54.878 "superblock": true,
00:32:54.878 "num_base_bdevs": 2,
00:32:54.878 "num_base_bdevs_discovered": 0,
00:32:54.878 "num_base_bdevs_operational": 2,
00:32:54.878 "base_bdevs_list": [
00:32:54.878 {
00:32:54.878 "name": "BaseBdev1",
00:32:54.878 "uuid": "00000000-0000-0000-0000-000000000000",
00:32:54.878 "is_configured": false,
00:32:54.878 "data_offset": 0,
00:32:54.878 "data_size": 0
00:32:54.878 },
00:32:54.878 {
00:32:54.878 "name": "BaseBdev2",
00:32:54.878 "uuid": "00000000-0000-0000-0000-000000000000",
00:32:54.878 "is_configured": false,
00:32:54.878 "data_offset": 0,
00:32:54.879 "data_size": 0
00:32:54.879 }
00:32:54.879 ]
00:32:54.879 }'
00:32:54.879 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:32:54.879 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:55.137 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:32:55.137 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:55.137 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:55.137 [2024-11-26 17:29:55.713842] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:32:55.137 [2024-11-26 17:29:55.713929] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:32:55.137 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:55.137 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:32:55.137 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:55.137 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:55.137 [2024-11-26 17:29:55.725828] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:32:55.137 [2024-11-26 17:29:55.725922] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:32:55.137 [2024-11-26 17:29:55.725957] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:32:55.137 [2024-11-26 17:29:55.725985] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:32:55.137 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:55.137 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:32:55.137 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:55.137 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:55.137 [2024-11-26 17:29:55.775844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:32:55.137 BaseBdev1
00:32:55.137 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:55.137 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:32:55.137 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:32:55.137 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:32:55.137 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:32:55.137 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:32:55.137 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:32:55.137 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:32:55.137 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:55.137 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:55.137 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:55.137 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:32:55.137 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:55.137 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:55.137 [
00:32:55.137 {
00:32:55.137 "name": "BaseBdev1",
00:32:55.137 "aliases": [
00:32:55.137 "3374198b-b6f2-4ea7-9365-36c919c1bb8e"
00:32:55.137 ],
00:32:55.137 "product_name": "Malloc disk",
00:32:55.137 "block_size": 512,
00:32:55.137 "num_blocks": 65536,
00:32:55.137 "uuid": "3374198b-b6f2-4ea7-9365-36c919c1bb8e",
00:32:55.137 "assigned_rate_limits": {
00:32:55.137 "rw_ios_per_sec": 0,
00:32:55.137 "rw_mbytes_per_sec": 0,
00:32:55.137 "r_mbytes_per_sec": 0,
00:32:55.137 "w_mbytes_per_sec": 0
00:32:55.137 },
00:32:55.137 "claimed": true,
00:32:55.137 "claim_type": "exclusive_write",
00:32:55.137 "zoned": false,
00:32:55.137 "supported_io_types": {
00:32:55.137 "read": true,
00:32:55.137 "write": true,
00:32:55.137 "unmap": true,
00:32:55.137 "flush": true,
00:32:55.137 "reset": true,
00:32:55.137 "nvme_admin": false,
00:32:55.137 "nvme_io": false,
00:32:55.137 "nvme_io_md": false,
00:32:55.137 "write_zeroes": true,
00:32:55.137 "zcopy": true,
00:32:55.137 "get_zone_info": false,
00:32:55.137 "zone_management": false,
00:32:55.137 "zone_append": false,
00:32:55.137 "compare": false,
00:32:55.137 "compare_and_write": false,
00:32:55.137 "abort": true,
00:32:55.137 "seek_hole": false,
00:32:55.137 "seek_data": false,
00:32:55.137 "copy": true,
00:32:55.137 "nvme_iov_md": false
00:32:55.137 },
00:32:55.137 "memory_domains": [
00:32:55.137 {
00:32:55.137 "dma_device_id": "system",
00:32:55.137 "dma_device_type": 1
00:32:55.137 },
00:32:55.137 {
00:32:55.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:32:55.137 "dma_device_type": 2
00:32:55.137 }
00:32:55.137 ],
00:32:55.137 "driver_specific": {}
00:32:55.137 }
00:32:55.137 ]
00:32:55.137 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:55.137 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:32:55.137 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:32:55.137 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:32:55.137 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:32:55.137 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:32:55.137 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:32:55.137 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:32:55.137 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:32:55.137 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:32:55.137 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:32:55.137 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:32:55.137 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:32:55.137 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:55.137 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:55.137 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:32:55.137 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:55.396 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:32:55.396 "name": "Existed_Raid",
00:32:55.396 "uuid": "204c16d5-bf26-4b1d-9b9c-b8ce751daafc",
00:32:55.396 "strip_size_kb": 64,
00:32:55.396 "state": "configuring",
00:32:55.396 "raid_level": "concat",
00:32:55.396 "superblock": true,
00:32:55.396 "num_base_bdevs": 2,
00:32:55.396 "num_base_bdevs_discovered": 1,
00:32:55.396 "num_base_bdevs_operational": 2,
00:32:55.396 "base_bdevs_list": [
00:32:55.396 {
00:32:55.396 "name": "BaseBdev1",
00:32:55.396 "uuid": "3374198b-b6f2-4ea7-9365-36c919c1bb8e",
00:32:55.396 "is_configured": true,
00:32:55.396 "data_offset": 2048,
00:32:55.396 "data_size": 63488
00:32:55.396 },
00:32:55.396 {
00:32:55.396 "name": "BaseBdev2",
00:32:55.396 "uuid": "00000000-0000-0000-0000-000000000000",
00:32:55.396 "is_configured": false,
00:32:55.396 "data_offset": 0,
00:32:55.396 "data_size": 0
00:32:55.396 }
00:32:55.396 ]
00:32:55.396 }'
00:32:55.396 17:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:32:55.396 17:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:55.656 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:32:55.656 17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:55.656 17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:55.656 [2024-11-26 17:29:56.283686] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:32:55.656 [2024-11-26 17:29:56.283805] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:32:55.656 17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:55.656 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:32:55.656 17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:55.656 17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:55.656 [2024-11-26 17:29:56.291749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:32:55.656 [2024-11-26 17:29:56.293775] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:32:55.656 [2024-11-26 17:29:56.293873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:32:55.656 17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:55.656 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:32:55.656 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:32:55.656 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:32:55.656 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:32:55.656 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:32:55.656 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:32:55.656 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:32:55.656 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:32:55.656 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:32:55.656 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:32:55.656 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:32:55.656 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:32:55.656 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:32:55.656 17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:55.656 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:32:55.656 17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:55.656 17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:55.915 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:32:55.915 "name": "Existed_Raid",
00:32:55.915 "uuid": "fd267d99-4253-432f-bcca-892b8ae8c778",
00:32:55.915 "strip_size_kb": 64,
00:32:55.915 "state": "configuring",
00:32:55.915 "raid_level": "concat",
00:32:55.915 "superblock": true,
00:32:55.915 "num_base_bdevs": 2,
00:32:55.915 "num_base_bdevs_discovered": 1,
00:32:55.915 "num_base_bdevs_operational": 2,
00:32:55.915 "base_bdevs_list": [
00:32:55.915 {
00:32:55.915 "name": "BaseBdev1",
00:32:55.915 "uuid": "3374198b-b6f2-4ea7-9365-36c919c1bb8e",
00:32:55.915 "is_configured": true,
00:32:55.915 "data_offset": 2048,
00:32:55.915 "data_size": 63488
00:32:55.915 },
00:32:55.915 {
00:32:55.915 "name": "BaseBdev2",
00:32:55.915 "uuid": "00000000-0000-0000-0000-000000000000",
00:32:55.915 "is_configured": false,
00:32:55.915 "data_offset": 0,
00:32:55.915 "data_size": 0
00:32:55.915 }
00:32:55.915 ]
00:32:55.915 }'
00:32:55.915 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:32:55.915 17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:56.175 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:32:56.175 17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:56.175 17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:56.175 [2024-11-26 17:29:56.765257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:32:56.175 BaseBdev2
00:32:56.175 [2024-11-26 17:29:56.765667] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:32:56.175 [2024-11-26 17:29:56.765691] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:32:56.175 [2024-11-26 17:29:56.765976] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:32:56.175 [2024-11-26 17:29:56.766162] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:32:56.175 [2024-11-26 17:29:56.766179] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:32:56.175 [2024-11-26 17:29:56.766336] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:32:56.175 17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:56.175 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:32:56.175 17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:32:56.175 17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:32:56.175 17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:32:56.175 17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:32:56.175 17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:32:56.175 17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:32:56.175 17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:56.175 17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:56.175 17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:56.175 17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:32:56.175 17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:56.175 17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:56.175 [
00:32:56.175 {
00:32:56.175 "name": "BaseBdev2",
00:32:56.175 "aliases": [
00:32:56.175 "7ba6df64-068b-43bc-b15d-bab53898c88c"
00:32:56.175 ],
00:32:56.175 "product_name": "Malloc disk",
00:32:56.175 "block_size": 512,
00:32:56.175 "num_blocks": 65536,
00:32:56.175 "uuid": "7ba6df64-068b-43bc-b15d-bab53898c88c",
00:32:56.175 "assigned_rate_limits": {
00:32:56.175 "rw_ios_per_sec": 0,
00:32:56.175 "rw_mbytes_per_sec": 0,
00:32:56.175 "r_mbytes_per_sec": 0,
00:32:56.175 "w_mbytes_per_sec": 0
00:32:56.175 },
00:32:56.175 "claimed": true,
00:32:56.175 "claim_type": "exclusive_write",
00:32:56.175 "zoned": false,
00:32:56.175 "supported_io_types": {
00:32:56.175 "read": true,
00:32:56.175 "write": true,
00:32:56.175 "unmap": true,
00:32:56.175 "flush": true,
00:32:56.175 "reset": true,
00:32:56.175 "nvme_admin": false,
00:32:56.175 "nvme_io": false,
00:32:56.175 "nvme_io_md": false,
00:32:56.175 "write_zeroes": true,
00:32:56.175 "zcopy": true,
00:32:56.175 "get_zone_info": false,
00:32:56.175 "zone_management": false,
00:32:56.175 "zone_append": false,
00:32:56.175 "compare": false,
00:32:56.175 "compare_and_write": false,
00:32:56.175 "abort": true,
00:32:56.175 "seek_hole": false,
00:32:56.175 "seek_data": false,
00:32:56.175 "copy": true,
00:32:56.175 "nvme_iov_md": false
00:32:56.175 },
00:32:56.175 "memory_domains": [
00:32:56.175 {
00:32:56.175 "dma_device_id": "system",
00:32:56.175 "dma_device_type": 1
00:32:56.175 },
00:32:56.175 {
00:32:56.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:32:56.175 "dma_device_type": 2
00:32:56.175 }
00:32:56.175 ],
00:32:56.175 "driver_specific": {}
00:32:56.175 }
00:32:56.175 ]
00:32:56.175 17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:56.175 17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:32:56.175 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:32:56.175 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:32:56.175 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2
00:32:56.175 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:32:56.175 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:32:56.175 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:32:56.175 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:32:56.175 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:32:56.175 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:32:56.175 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:32:56.175 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:32:56.175 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:32:56.175 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:32:56.175 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:32:56.175 17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:56.175 17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:56.175 17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:56.175 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:32:56.175 "name": "Existed_Raid",
00:32:56.175 "uuid": "fd267d99-4253-432f-bcca-892b8ae8c778",
00:32:56.175 "strip_size_kb": 64,
00:32:56.175 "state": "online",
00:32:56.175 "raid_level": "concat",
00:32:56.175 "superblock": true,
00:32:56.175 "num_base_bdevs": 2,
00:32:56.175 "num_base_bdevs_discovered": 2,
00:32:56.175 "num_base_bdevs_operational": 2,
00:32:56.175 "base_bdevs_list": [
00:32:56.175 {
00:32:56.175 "name": "BaseBdev1",
00:32:56.175 "uuid": "3374198b-b6f2-4ea7-9365-36c919c1bb8e",
00:32:56.175 "is_configured": true,
00:32:56.175 "data_offset": 2048,
00:32:56.175 "data_size": 63488
00:32:56.175 },
00:32:56.175 {
00:32:56.175 "name": "BaseBdev2",
00:32:56.175 "uuid": "7ba6df64-068b-43bc-b15d-bab53898c88c",
00:32:56.175 "is_configured": true,
00:32:56.175 "data_offset": 2048,
00:32:56.175 "data_size": 63488
00:32:56.175 }
00:32:56.175 ]
00:32:56.175 }'
00:32:56.175 17:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:32:56.175 17:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:56.746 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:32:56.746 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:32:56.746 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:32:56.746 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:32:56.746 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:32:56.746 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:32:56.746 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:32:56.746 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:32:56.746 17:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:56.746 17:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:56.746 [2024-11-26 17:29:57.220878] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:32:56.746 17:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:56.746 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:32:56.746 "name": "Existed_Raid",
00:32:56.746 "aliases": [
00:32:56.746 "fd267d99-4253-432f-bcca-892b8ae8c778"
00:32:56.746 ],
00:32:56.746 "product_name": "Raid Volume",
00:32:56.746 "block_size": 512,
00:32:56.746 "num_blocks": 126976,
00:32:56.746 "uuid": "fd267d99-4253-432f-bcca-892b8ae8c778",
00:32:56.746 "assigned_rate_limits": {
00:32:56.746 "rw_ios_per_sec": 0,
00:32:56.746 "rw_mbytes_per_sec": 0,
00:32:56.746 "r_mbytes_per_sec": 0,
00:32:56.746 "w_mbytes_per_sec": 0
00:32:56.746 },
00:32:56.746 "claimed": false,
00:32:56.746 "zoned": false,
00:32:56.746 "supported_io_types": {
00:32:56.746 "read": true,
00:32:56.746 "write": true,
00:32:56.746 "unmap": true,
00:32:56.746 "flush": true,
00:32:56.746 "reset": true,
00:32:56.746 "nvme_admin": false,
00:32:56.746 "nvme_io": false,
00:32:56.746 "nvme_io_md": false,
00:32:56.746 "write_zeroes": true,
00:32:56.746 "zcopy": false,
00:32:56.746 "get_zone_info": false,
00:32:56.746 "zone_management": false,
00:32:56.746 "zone_append": false,
00:32:56.746 "compare": false,
00:32:56.746 "compare_and_write": false,
00:32:56.747 "abort": false,
00:32:56.747 "seek_hole": false,
00:32:56.747 "seek_data": false,
00:32:56.747 "copy": false,
00:32:56.747 "nvme_iov_md": false
00:32:56.747 },
00:32:56.747 "memory_domains": [
00:32:56.747 {
00:32:56.747 "dma_device_id": "system",
00:32:56.747 "dma_device_type": 1
00:32:56.747 },
00:32:56.747 {
00:32:56.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:32:56.747 "dma_device_type": 2
00:32:56.747 },
00:32:56.747 {
00:32:56.747 "dma_device_id": "system",
00:32:56.747 "dma_device_type": 1
00:32:56.747 },
00:32:56.747 {
00:32:56.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:32:56.747 "dma_device_type": 2
00:32:56.747 }
00:32:56.747 ],
00:32:56.747 "driver_specific": {
00:32:56.747 "raid": {
00:32:56.747 "uuid": "fd267d99-4253-432f-bcca-892b8ae8c778",
00:32:56.747 "strip_size_kb": 64,
00:32:56.747 "state": "online",
00:32:56.747 "raid_level": "concat",
00:32:56.747 "superblock": true,
00:32:56.747 "num_base_bdevs": 2,
00:32:56.747 "num_base_bdevs_discovered": 2,
00:32:56.747 "num_base_bdevs_operational": 2,
00:32:56.747 "base_bdevs_list": [
00:32:56.747 {
00:32:56.747 "name": "BaseBdev1",
00:32:56.747 "uuid": "3374198b-b6f2-4ea7-9365-36c919c1bb8e",
00:32:56.747 "is_configured": true,
00:32:56.747 "data_offset": 2048,
00:32:56.747 "data_size": 63488
00:32:56.747 },
00:32:56.747 {
00:32:56.747 "name": "BaseBdev2",
00:32:56.747 "uuid": "7ba6df64-068b-43bc-b15d-bab53898c88c",
00:32:56.747 "is_configured": true,
00:32:56.747 "data_offset": 2048,
00:32:56.747 "data_size": 63488
00:32:56.747 }
00:32:56.747 ]
00:32:56.747 }
00:32:56.747 }
00:32:56.747 }'
00:32:56.747 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:32:56.747 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:32:56.747 BaseBdev2'
00:32:56.747 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:32:56.747 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:32:56.747 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:32:56.747 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:32:56.747 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:32:56.747 17:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:56.747 17:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:56.747 17:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:56.747 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:32:56.747 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:32:56.747 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:32:56.747 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:32:56.747 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:32:56.747 17:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:56.747 17:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:56.747 17:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:56.747 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:32:56.747 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:32:56.747 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:32:56.747 17:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:56.747 17:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:56.747 [2024-11-26 17:29:57.424268] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:32:56.747 [2024-11-26 17:29:57.424355] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:32:56.747 [2024-11-26 17:29:57.424438] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:32:57.007 17:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:57.007 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state
00:32:57.007 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat
00:32:57.007 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in
00:32:57.007 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1
00:32:57.007 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:32:57.007 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1
00:32:57.007 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:32:57.007 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:32:57.007 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:32:57.007 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:32:57.007 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:32:57.007 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:32:57.007 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:32:57.007 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:32:57.007 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:32:57.007 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:32:57.007 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:32:57.007 17:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:57.007 17:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:32:57.007 17:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:57.007 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:32:57.007 "name": "Existed_Raid",
00:32:57.007 "uuid": "fd267d99-4253-432f-bcca-892b8ae8c778",
00:32:57.007 "strip_size_kb": 64,
00:32:57.007 "state": "offline",
00:32:57.007 "raid_level": "concat",
00:32:57.007 "superblock": true,
00:32:57.007 "num_base_bdevs": 2,
00:32:57.007 "num_base_bdevs_discovered": 1,
00:32:57.007 "num_base_bdevs_operational": 1,
00:32:57.007 "base_bdevs_list": [
00:32:57.007 {
00:32:57.007 "name": null,
00:32:57.007 "uuid": "00000000-0000-0000-0000-000000000000",
00:32:57.007 "is_configured": false,
00:32:57.007 "data_offset": 0,
00:32:57.007 "data_size": 63488
00:32:57.007 },
00:32:57.007 {
00:32:57.007 "name": "BaseBdev2",
00:32:57.007 "uuid": "7ba6df64-068b-43bc-b15d-bab53898c88c",
00:32:57.007 "is_configured": true,
00:32:57.007 "data_offset": 2048,
00:32:57.007 "data_size": 63488
00:32:57.007 }
00:32:57.007 ]
00:32:57.007 }' 00:32:57.007 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:57.007 17:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:57.575 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:32:57.575 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:57.575 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:57.575 17:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.575 17:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:57.575 17:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:32:57.575 17:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.575 17:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:32:57.575 17:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:57.575 17:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:32:57.575 17:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.575 17:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:57.575 [2024-11-26 17:29:58.028375] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:57.575 [2024-11-26 17:29:58.028477] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:32:57.575 17:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.575 17:29:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:32:57.575 17:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:57.575 17:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:57.575 17:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:32:57.575 17:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.575 17:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:57.575 17:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.575 17:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:32:57.575 17:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:32:57.575 17:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:32:57.575 17:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62196 00:32:57.575 17:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62196 ']' 00:32:57.575 17:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62196 00:32:57.575 17:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:32:57.575 17:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:57.575 17:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62196 00:32:57.575 17:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:57.575 17:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:32:57.575 17:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62196' 00:32:57.576 killing process with pid 62196 00:32:57.576 17:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62196 00:32:57.576 [2024-11-26 17:29:58.228600] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:57.576 17:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62196 00:32:57.576 [2024-11-26 17:29:58.245718] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:58.960 17:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:32:58.960 00:32:58.960 real 0m5.121s 00:32:58.960 user 0m7.402s 00:32:58.960 sys 0m0.781s 00:32:58.960 ************************************ 00:32:58.960 END TEST raid_state_function_test_sb 00:32:58.960 ************************************ 00:32:58.960 17:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:58.960 17:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:58.960 17:29:59 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:32:58.960 17:29:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:58.960 17:29:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:58.960 17:29:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:58.960 ************************************ 00:32:58.960 START TEST raid_superblock_test 00:32:58.960 ************************************ 00:32:58.960 17:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:32:58.960 17:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:32:58.960 17:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 
-- # local num_base_bdevs=2 00:32:58.960 17:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:32:58.960 17:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:32:58.960 17:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:32:58.960 17:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:32:58.960 17:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:32:58.960 17:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:32:58.960 17:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:32:58.960 17:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:32:58.960 17:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:32:58.960 17:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:32:58.960 17:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:32:58.960 17:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:32:58.960 17:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:32:58.960 17:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:32:58.961 17:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62445 00:32:58.961 17:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:32:58.961 17:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62445 00:32:58.961 17:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62445 ']' 00:32:58.961 17:29:59 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:58.961 17:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:58.961 17:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:58.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:58.961 17:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:58.961 17:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:58.961 [2024-11-26 17:29:59.595306] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:32:58.961 [2024-11-26 17:29:59.595548] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62445 ] 00:32:59.246 [2024-11-26 17:29:59.772940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:59.246 [2024-11-26 17:29:59.890325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:59.505 [2024-11-26 17:30:00.097767] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:59.505 [2024-11-26 17:30:00.097898] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:59.764 17:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:59.764 17:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:32:59.764 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:32:59.764 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:59.764 
17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:32:59.764 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:32:59.764 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:32:59.764 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:59.764 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:32:59.764 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:59.764 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:32:59.764 17:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.764 17:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:00.023 malloc1 00:33:00.023 17:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.023 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:00.023 17:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.023 17:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:00.023 [2024-11-26 17:30:00.503682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:00.023 [2024-11-26 17:30:00.503794] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:00.023 [2024-11-26 17:30:00.503834] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:33:00.023 [2024-11-26 17:30:00.503901] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:33:00.023 [2024-11-26 17:30:00.506056] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:00.023 [2024-11-26 17:30:00.506142] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:00.023 pt1 00:33:00.023 17:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.023 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:33:00.023 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:33:00.023 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:33:00.023 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:33:00.023 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:33:00.023 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:00.023 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:33:00.023 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:00.023 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:33:00.023 17:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.023 17:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:00.023 malloc2 00:33:00.023 17:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.023 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:00.023 17:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:33:00.023 17:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:00.023 [2024-11-26 17:30:00.561054] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:00.023 [2024-11-26 17:30:00.561168] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:00.023 [2024-11-26 17:30:00.561215] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:33:00.023 [2024-11-26 17:30:00.561256] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:00.023 [2024-11-26 17:30:00.563392] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:00.023 [2024-11-26 17:30:00.563466] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:00.023 pt2 00:33:00.023 17:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.023 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:33:00.023 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:33:00.023 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:33:00.023 17:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.023 17:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:00.023 [2024-11-26 17:30:00.573091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:00.023 [2024-11-26 17:30:00.575032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:00.023 [2024-11-26 17:30:00.575271] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:33:00.023 [2024-11-26 17:30:00.575325] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:33:00.023 [2024-11-26 17:30:00.575715] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:33:00.024 [2024-11-26 17:30:00.575946] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:33:00.024 [2024-11-26 17:30:00.575989] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:33:00.024 [2024-11-26 17:30:00.576203] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:00.024 17:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.024 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:33:00.024 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:00.024 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:00.024 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:33:00.024 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:00.024 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:00.024 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:00.024 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:00.024 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:00.024 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:00.024 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:00.024 17:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.024 17:30:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:00.024 17:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:00.024 17:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.024 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:00.024 "name": "raid_bdev1", 00:33:00.024 "uuid": "a6778dc6-1936-4286-9659-69a79ea10d4c", 00:33:00.024 "strip_size_kb": 64, 00:33:00.024 "state": "online", 00:33:00.024 "raid_level": "concat", 00:33:00.024 "superblock": true, 00:33:00.024 "num_base_bdevs": 2, 00:33:00.024 "num_base_bdevs_discovered": 2, 00:33:00.024 "num_base_bdevs_operational": 2, 00:33:00.024 "base_bdevs_list": [ 00:33:00.024 { 00:33:00.024 "name": "pt1", 00:33:00.024 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:00.024 "is_configured": true, 00:33:00.024 "data_offset": 2048, 00:33:00.024 "data_size": 63488 00:33:00.024 }, 00:33:00.024 { 00:33:00.024 "name": "pt2", 00:33:00.024 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:00.024 "is_configured": true, 00:33:00.024 "data_offset": 2048, 00:33:00.024 "data_size": 63488 00:33:00.024 } 00:33:00.024 ] 00:33:00.024 }' 00:33:00.024 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:00.024 17:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:00.592 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:33:00.592 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:33:00.592 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:33:00.592 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:33:00.592 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # 
local name 00:33:00.592 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:33:00.592 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:33:00.592 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:00.592 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.592 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:00.592 [2024-11-26 17:30:01.068563] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:00.592 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.592 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:00.592 "name": "raid_bdev1", 00:33:00.592 "aliases": [ 00:33:00.592 "a6778dc6-1936-4286-9659-69a79ea10d4c" 00:33:00.592 ], 00:33:00.592 "product_name": "Raid Volume", 00:33:00.592 "block_size": 512, 00:33:00.592 "num_blocks": 126976, 00:33:00.592 "uuid": "a6778dc6-1936-4286-9659-69a79ea10d4c", 00:33:00.592 "assigned_rate_limits": { 00:33:00.592 "rw_ios_per_sec": 0, 00:33:00.592 "rw_mbytes_per_sec": 0, 00:33:00.592 "r_mbytes_per_sec": 0, 00:33:00.592 "w_mbytes_per_sec": 0 00:33:00.592 }, 00:33:00.592 "claimed": false, 00:33:00.592 "zoned": false, 00:33:00.592 "supported_io_types": { 00:33:00.592 "read": true, 00:33:00.592 "write": true, 00:33:00.592 "unmap": true, 00:33:00.592 "flush": true, 00:33:00.592 "reset": true, 00:33:00.592 "nvme_admin": false, 00:33:00.592 "nvme_io": false, 00:33:00.592 "nvme_io_md": false, 00:33:00.592 "write_zeroes": true, 00:33:00.592 "zcopy": false, 00:33:00.592 "get_zone_info": false, 00:33:00.592 "zone_management": false, 00:33:00.592 "zone_append": false, 00:33:00.592 "compare": false, 00:33:00.592 "compare_and_write": false, 00:33:00.592 "abort": false, 00:33:00.592 
"seek_hole": false, 00:33:00.592 "seek_data": false, 00:33:00.592 "copy": false, 00:33:00.592 "nvme_iov_md": false 00:33:00.592 }, 00:33:00.592 "memory_domains": [ 00:33:00.592 { 00:33:00.592 "dma_device_id": "system", 00:33:00.592 "dma_device_type": 1 00:33:00.592 }, 00:33:00.592 { 00:33:00.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:00.592 "dma_device_type": 2 00:33:00.592 }, 00:33:00.592 { 00:33:00.592 "dma_device_id": "system", 00:33:00.592 "dma_device_type": 1 00:33:00.592 }, 00:33:00.592 { 00:33:00.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:00.592 "dma_device_type": 2 00:33:00.592 } 00:33:00.592 ], 00:33:00.592 "driver_specific": { 00:33:00.592 "raid": { 00:33:00.592 "uuid": "a6778dc6-1936-4286-9659-69a79ea10d4c", 00:33:00.592 "strip_size_kb": 64, 00:33:00.592 "state": "online", 00:33:00.592 "raid_level": "concat", 00:33:00.592 "superblock": true, 00:33:00.592 "num_base_bdevs": 2, 00:33:00.592 "num_base_bdevs_discovered": 2, 00:33:00.592 "num_base_bdevs_operational": 2, 00:33:00.592 "base_bdevs_list": [ 00:33:00.592 { 00:33:00.592 "name": "pt1", 00:33:00.592 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:00.592 "is_configured": true, 00:33:00.592 "data_offset": 2048, 00:33:00.592 "data_size": 63488 00:33:00.592 }, 00:33:00.592 { 00:33:00.592 "name": "pt2", 00:33:00.592 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:00.592 "is_configured": true, 00:33:00.592 "data_offset": 2048, 00:33:00.592 "data_size": 63488 00:33:00.592 } 00:33:00.592 ] 00:33:00.592 } 00:33:00.592 } 00:33:00.592 }' 00:33:00.592 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:00.592 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:33:00.592 pt2' 00:33:00.592 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:33:00.592 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:33:00.592 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:00.592 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:33:00.592 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.592 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:00.592 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:00.592 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.592 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:00.592 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:00.592 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:00.593 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:00.593 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:33:00.593 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.593 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:00.593 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.593 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:00.593 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:00.593 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:00.593 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.593 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:00.593 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:33:00.593 [2024-11-26 17:30:01.280155] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a6778dc6-1936-4286-9659-69a79ea10d4c 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a6778dc6-1936-4286-9659-69a79ea10d4c ']' 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:00.853 [2024-11-26 17:30:01.327742] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:00.853 [2024-11-26 17:30:01.327821] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:00.853 [2024-11-26 17:30:01.327945] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:00.853 [2024-11-26 17:30:01.328026] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:00.853 [2024-11-26 17:30:01.328075] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:33:00.853 17:30:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:00.853 [2024-11-26 17:30:01.451679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:33:00.853 [2024-11-26 17:30:01.453773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:33:00.853 [2024-11-26 17:30:01.453894] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:33:00.853 [2024-11-26 17:30:01.453993] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:33:00.853 [2024-11-26 17:30:01.454012] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:00.853 [2024-11-26 17:30:01.454024] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:33:00.853 request: 00:33:00.853 { 00:33:00.853 "name": "raid_bdev1", 00:33:00.853 "raid_level": "concat", 00:33:00.853 "base_bdevs": [ 00:33:00.853 "malloc1", 00:33:00.853 "malloc2" 00:33:00.853 ], 00:33:00.853 "strip_size_kb": 64, 00:33:00.853 "superblock": false, 00:33:00.853 "method": "bdev_raid_create", 00:33:00.853 "req_id": 1 00:33:00.853 } 00:33:00.853 Got JSON-RPC error response 00:33:00.853 response: 00:33:00.853 { 00:33:00.853 "code": -17, 00:33:00.853 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:33:00.853 } 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:00.853 [2024-11-26 17:30:01.515669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:00.853 [2024-11-26 17:30:01.515773] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:00.853 [2024-11-26 17:30:01.515811] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:33:00.853 [2024-11-26 17:30:01.515864] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:00.853 [2024-11-26 17:30:01.518138] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:00.853 [2024-11-26 17:30:01.518212] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:00.853 [2024-11-26 17:30:01.518320] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:33:00.853 [2024-11-26 17:30:01.518409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:00.853 pt1 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:00.853 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:00.854 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:00.854 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:00.854 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:00.854 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:00.854 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:00.854 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.854 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:01.113 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.113 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:01.113 "name": "raid_bdev1", 00:33:01.113 "uuid": "a6778dc6-1936-4286-9659-69a79ea10d4c", 00:33:01.113 "strip_size_kb": 64, 00:33:01.113 "state": "configuring", 00:33:01.113 "raid_level": "concat", 00:33:01.113 "superblock": true, 00:33:01.113 "num_base_bdevs": 2, 00:33:01.113 "num_base_bdevs_discovered": 1, 00:33:01.113 "num_base_bdevs_operational": 2, 00:33:01.113 "base_bdevs_list": [ 00:33:01.113 { 00:33:01.113 
"name": "pt1", 00:33:01.113 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:01.113 "is_configured": true, 00:33:01.113 "data_offset": 2048, 00:33:01.113 "data_size": 63488 00:33:01.113 }, 00:33:01.113 { 00:33:01.113 "name": null, 00:33:01.113 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:01.113 "is_configured": false, 00:33:01.113 "data_offset": 2048, 00:33:01.113 "data_size": 63488 00:33:01.113 } 00:33:01.113 ] 00:33:01.113 }' 00:33:01.113 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:01.113 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:01.398 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:33:01.398 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:33:01.398 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:33:01.398 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:01.398 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.398 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:01.398 [2024-11-26 17:30:01.935698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:01.398 [2024-11-26 17:30:01.935846] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:01.398 [2024-11-26 17:30:01.935893] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:33:01.398 [2024-11-26 17:30:01.935945] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:01.398 [2024-11-26 17:30:01.936500] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:01.398 [2024-11-26 17:30:01.936586] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:01.398 [2024-11-26 17:30:01.936724] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:33:01.398 [2024-11-26 17:30:01.936785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:01.398 [2024-11-26 17:30:01.936952] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:33:01.398 [2024-11-26 17:30:01.936997] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:33:01.398 [2024-11-26 17:30:01.937282] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:33:01.398 [2024-11-26 17:30:01.937488] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:33:01.398 [2024-11-26 17:30:01.937548] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:33:01.398 [2024-11-26 17:30:01.937745] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:01.398 pt2 00:33:01.398 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.398 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:33:01.398 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:33:01.398 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:33:01.398 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:01.398 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:01.398 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:33:01.398 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:01.398 
17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:01.398 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:01.398 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:01.398 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:01.398 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:01.398 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:01.398 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.398 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:01.398 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:01.398 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.398 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:01.398 "name": "raid_bdev1", 00:33:01.398 "uuid": "a6778dc6-1936-4286-9659-69a79ea10d4c", 00:33:01.398 "strip_size_kb": 64, 00:33:01.398 "state": "online", 00:33:01.398 "raid_level": "concat", 00:33:01.398 "superblock": true, 00:33:01.398 "num_base_bdevs": 2, 00:33:01.398 "num_base_bdevs_discovered": 2, 00:33:01.398 "num_base_bdevs_operational": 2, 00:33:01.398 "base_bdevs_list": [ 00:33:01.398 { 00:33:01.398 "name": "pt1", 00:33:01.398 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:01.398 "is_configured": true, 00:33:01.398 "data_offset": 2048, 00:33:01.398 "data_size": 63488 00:33:01.398 }, 00:33:01.398 { 00:33:01.398 "name": "pt2", 00:33:01.398 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:01.398 "is_configured": true, 00:33:01.398 "data_offset": 2048, 00:33:01.398 "data_size": 63488 
00:33:01.398 } 00:33:01.398 ] 00:33:01.398 }' 00:33:01.398 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:01.398 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:01.656 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:33:01.656 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:33:01.656 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:33:01.656 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:33:01.656 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:33:01.656 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:33:01.656 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:01.656 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.656 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:01.656 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:33:01.656 [2024-11-26 17:30:02.335940] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:01.656 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.914 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:01.914 "name": "raid_bdev1", 00:33:01.914 "aliases": [ 00:33:01.914 "a6778dc6-1936-4286-9659-69a79ea10d4c" 00:33:01.914 ], 00:33:01.914 "product_name": "Raid Volume", 00:33:01.914 "block_size": 512, 00:33:01.914 "num_blocks": 126976, 00:33:01.914 "uuid": "a6778dc6-1936-4286-9659-69a79ea10d4c", 00:33:01.914 "assigned_rate_limits": { 00:33:01.914 
"rw_ios_per_sec": 0, 00:33:01.914 "rw_mbytes_per_sec": 0, 00:33:01.914 "r_mbytes_per_sec": 0, 00:33:01.914 "w_mbytes_per_sec": 0 00:33:01.914 }, 00:33:01.914 "claimed": false, 00:33:01.914 "zoned": false, 00:33:01.914 "supported_io_types": { 00:33:01.914 "read": true, 00:33:01.914 "write": true, 00:33:01.914 "unmap": true, 00:33:01.914 "flush": true, 00:33:01.914 "reset": true, 00:33:01.914 "nvme_admin": false, 00:33:01.914 "nvme_io": false, 00:33:01.914 "nvme_io_md": false, 00:33:01.914 "write_zeroes": true, 00:33:01.914 "zcopy": false, 00:33:01.914 "get_zone_info": false, 00:33:01.914 "zone_management": false, 00:33:01.914 "zone_append": false, 00:33:01.914 "compare": false, 00:33:01.914 "compare_and_write": false, 00:33:01.914 "abort": false, 00:33:01.914 "seek_hole": false, 00:33:01.914 "seek_data": false, 00:33:01.914 "copy": false, 00:33:01.914 "nvme_iov_md": false 00:33:01.914 }, 00:33:01.914 "memory_domains": [ 00:33:01.914 { 00:33:01.914 "dma_device_id": "system", 00:33:01.914 "dma_device_type": 1 00:33:01.914 }, 00:33:01.914 { 00:33:01.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:01.914 "dma_device_type": 2 00:33:01.914 }, 00:33:01.914 { 00:33:01.914 "dma_device_id": "system", 00:33:01.914 "dma_device_type": 1 00:33:01.914 }, 00:33:01.914 { 00:33:01.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:01.914 "dma_device_type": 2 00:33:01.914 } 00:33:01.914 ], 00:33:01.914 "driver_specific": { 00:33:01.914 "raid": { 00:33:01.914 "uuid": "a6778dc6-1936-4286-9659-69a79ea10d4c", 00:33:01.914 "strip_size_kb": 64, 00:33:01.914 "state": "online", 00:33:01.914 "raid_level": "concat", 00:33:01.914 "superblock": true, 00:33:01.914 "num_base_bdevs": 2, 00:33:01.914 "num_base_bdevs_discovered": 2, 00:33:01.914 "num_base_bdevs_operational": 2, 00:33:01.914 "base_bdevs_list": [ 00:33:01.914 { 00:33:01.914 "name": "pt1", 00:33:01.914 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:01.914 "is_configured": true, 00:33:01.914 "data_offset": 2048, 00:33:01.914 
"data_size": 63488 00:33:01.914 }, 00:33:01.914 { 00:33:01.914 "name": "pt2", 00:33:01.914 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:01.914 "is_configured": true, 00:33:01.914 "data_offset": 2048, 00:33:01.914 "data_size": 63488 00:33:01.914 } 00:33:01.914 ] 00:33:01.914 } 00:33:01.914 } 00:33:01.914 }' 00:33:01.914 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:01.914 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:33:01.914 pt2' 00:33:01.914 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:01.914 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:33:01.914 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:01.914 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:33:01.914 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:01.914 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.914 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:01.914 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.914 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:01.914 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:01.914 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:01.914 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 
00:33:01.914 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:01.914 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.914 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:01.914 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.914 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:01.914 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:01.914 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:33:01.914 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:01.914 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.914 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:01.914 [2024-11-26 17:30:02.575982] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:01.914 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.914 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a6778dc6-1936-4286-9659-69a79ea10d4c '!=' a6778dc6-1936-4286-9659-69a79ea10d4c ']' 00:33:01.914 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:33:01.914 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:33:01.914 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:33:01.914 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62445 00:33:01.914 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62445 ']' 
00:33:01.914 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62445 00:33:01.914 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:33:02.172 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:02.172 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62445 00:33:02.172 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:02.172 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:02.172 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62445' 00:33:02.172 killing process with pid 62445 00:33:02.172 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62445 00:33:02.172 [2024-11-26 17:30:02.648825] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:02.172 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62445 00:33:02.172 [2024-11-26 17:30:02.649033] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:02.172 [2024-11-26 17:30:02.649088] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:02.172 [2024-11-26 17:30:02.649100] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:33:02.430 [2024-11-26 17:30:02.887443] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:03.803 ************************************ 00:33:03.803 END TEST raid_superblock_test 00:33:03.803 ************************************ 00:33:03.803 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:33:03.803 00:33:03.803 real 0m4.581s 00:33:03.803 user 0m6.327s 00:33:03.803 sys 
0m0.778s 00:33:03.803 17:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:03.803 17:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:03.803 17:30:04 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:33:03.803 17:30:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:33:03.803 17:30:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:03.803 17:30:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:03.803 ************************************ 00:33:03.803 START TEST raid_read_error_test 00:33:03.803 ************************************ 00:33:03.803 17:30:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:33:03.803 17:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:33:03.803 17:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:33:03.803 17:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:33:03.803 17:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:33:03.803 17:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:33:03.803 17:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:33:03.803 17:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:33:03.803 17:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:33:03.803 17:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:33:03.803 17:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:33:03.803 17:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:33:03.803 
17:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:33:03.803 17:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:33:03.803 17:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:33:03.803 17:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:33:03.803 17:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:33:03.803 17:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:33:03.803 17:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:33:03.803 17:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:33:03.803 17:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:33:03.803 17:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:33:03.803 17:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:33:03.803 17:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Lv8swoWVty 00:33:03.803 17:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62656 00:33:03.803 17:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:33:03.803 17:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62656 00:33:03.803 17:30:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62656 ']' 00:33:03.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:03.803 17:30:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:03.803 17:30:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:03.803 17:30:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:03.803 17:30:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:03.803 17:30:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:03.803 [2024-11-26 17:30:04.242729] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:33:03.803 [2024-11-26 17:30:04.242853] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62656 ] 00:33:03.803 [2024-11-26 17:30:04.401777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:04.061 [2024-11-26 17:30:04.524636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:04.061 [2024-11-26 17:30:04.738008] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:04.061 [2024-11-26 17:30:04.738078] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:04.628 17:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:04.628 17:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:33:04.628 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:33:04.628 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:33:04.628 17:30:05 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.628 17:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:04.628 BaseBdev1_malloc 00:33:04.628 17:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.628 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:33:04.628 17:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.628 17:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:04.628 true 00:33:04.628 17:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.628 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:33:04.628 17:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.628 17:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:04.628 [2024-11-26 17:30:05.161503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:33:04.628 [2024-11-26 17:30:05.161573] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:04.628 [2024-11-26 17:30:05.161594] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:33:04.628 [2024-11-26 17:30:05.161606] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:04.628 [2024-11-26 17:30:05.163907] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:04.628 [2024-11-26 17:30:05.164028] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:04.628 BaseBdev1 00:33:04.628 17:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.628 
17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:33:04.628 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:33:04.628 17:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.628 17:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:04.628 BaseBdev2_malloc 00:33:04.628 17:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.628 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:33:04.628 17:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.628 17:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:04.628 true 00:33:04.629 17:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.629 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:33:04.629 17:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.629 17:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:04.629 [2024-11-26 17:30:05.225933] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:33:04.629 [2024-11-26 17:30:05.226065] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:04.629 [2024-11-26 17:30:05.226101] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:33:04.629 [2024-11-26 17:30:05.226133] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:04.629 [2024-11-26 17:30:05.228402] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:33:04.629 [2024-11-26 17:30:05.228504] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:33:04.629 BaseBdev2 00:33:04.629 17:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.629 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:33:04.629 17:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.629 17:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:04.629 [2024-11-26 17:30:05.237979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:04.629 [2024-11-26 17:30:05.239848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:04.629 [2024-11-26 17:30:05.240109] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:33:04.629 [2024-11-26 17:30:05.240170] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:33:04.629 [2024-11-26 17:30:05.240461] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:33:04.629 [2024-11-26 17:30:05.240726] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:33:04.629 [2024-11-26 17:30:05.240778] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:33:04.629 [2024-11-26 17:30:05.240991] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:04.629 17:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.629 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:33:04.629 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:33:04.629 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:04.629 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:33:04.629 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:04.629 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:04.629 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:04.629 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:04.629 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:04.629 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:04.629 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:04.629 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:04.629 17:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.629 17:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:04.629 17:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.629 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:04.629 "name": "raid_bdev1", 00:33:04.629 "uuid": "4d484510-0d16-4a59-b7bf-e671ba9f4d8e", 00:33:04.629 "strip_size_kb": 64, 00:33:04.629 "state": "online", 00:33:04.629 "raid_level": "concat", 00:33:04.629 "superblock": true, 00:33:04.629 "num_base_bdevs": 2, 00:33:04.629 "num_base_bdevs_discovered": 2, 00:33:04.629 "num_base_bdevs_operational": 2, 00:33:04.629 "base_bdevs_list": [ 00:33:04.629 { 00:33:04.629 "name": "BaseBdev1", 00:33:04.629 "uuid": 
"d574caa8-8fd9-5196-8cf6-61284ba54d46", 00:33:04.629 "is_configured": true, 00:33:04.629 "data_offset": 2048, 00:33:04.629 "data_size": 63488 00:33:04.629 }, 00:33:04.629 { 00:33:04.629 "name": "BaseBdev2", 00:33:04.629 "uuid": "ee8b6b7b-f606-5398-afab-e7c8a49c2e32", 00:33:04.629 "is_configured": true, 00:33:04.629 "data_offset": 2048, 00:33:04.629 "data_size": 63488 00:33:04.629 } 00:33:04.629 ] 00:33:04.629 }' 00:33:04.629 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:04.629 17:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:05.197 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:33:05.197 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:33:05.197 [2024-11-26 17:30:05.810288] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:33:06.135 17:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:33:06.135 17:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.135 17:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:06.135 17:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.135 17:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:33:06.135 17:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:33:06.135 17:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:33:06.135 17:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:33:06.135 17:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:33:06.135 17:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:06.135 17:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:33:06.135 17:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:06.135 17:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:06.135 17:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:06.135 17:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:06.135 17:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:06.135 17:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:06.135 17:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:06.135 17:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:06.135 17:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.135 17:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:06.135 17:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.135 17:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:06.135 "name": "raid_bdev1", 00:33:06.135 "uuid": "4d484510-0d16-4a59-b7bf-e671ba9f4d8e", 00:33:06.135 "strip_size_kb": 64, 00:33:06.135 "state": "online", 00:33:06.135 "raid_level": "concat", 00:33:06.135 "superblock": true, 00:33:06.135 "num_base_bdevs": 2, 00:33:06.135 "num_base_bdevs_discovered": 2, 00:33:06.135 "num_base_bdevs_operational": 2, 00:33:06.135 "base_bdevs_list": [ 00:33:06.135 { 00:33:06.135 "name": "BaseBdev1", 00:33:06.135 "uuid": 
"d574caa8-8fd9-5196-8cf6-61284ba54d46", 00:33:06.135 "is_configured": true, 00:33:06.135 "data_offset": 2048, 00:33:06.135 "data_size": 63488 00:33:06.135 }, 00:33:06.135 { 00:33:06.135 "name": "BaseBdev2", 00:33:06.135 "uuid": "ee8b6b7b-f606-5398-afab-e7c8a49c2e32", 00:33:06.135 "is_configured": true, 00:33:06.135 "data_offset": 2048, 00:33:06.135 "data_size": 63488 00:33:06.135 } 00:33:06.135 ] 00:33:06.135 }' 00:33:06.135 17:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:06.135 17:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:06.796 17:30:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:06.796 17:30:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.796 17:30:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:06.796 [2024-11-26 17:30:07.186863] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:06.796 [2024-11-26 17:30:07.186953] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:06.796 [2024-11-26 17:30:07.189990] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:06.796 [2024-11-26 17:30:07.190077] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:06.796 [2024-11-26 17:30:07.190126] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:06.796 [2024-11-26 17:30:07.190204] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:33:06.796 { 00:33:06.796 "results": [ 00:33:06.796 { 00:33:06.796 "job": "raid_bdev1", 00:33:06.797 "core_mask": "0x1", 00:33:06.797 "workload": "randrw", 00:33:06.797 "percentage": 50, 00:33:06.797 "status": "finished", 00:33:06.797 "queue_depth": 1, 00:33:06.797 "io_size": 
131072, 00:33:06.797 "runtime": 1.377496, 00:33:06.797 "iops": 14924.18126803998, 00:33:06.797 "mibps": 1865.5226585049975, 00:33:06.797 "io_failed": 1, 00:33:06.797 "io_timeout": 0, 00:33:06.797 "avg_latency_us": 92.5273297789661, 00:33:06.797 "min_latency_us": 27.72401746724891, 00:33:06.797 "max_latency_us": 1538.235807860262 00:33:06.797 } 00:33:06.797 ], 00:33:06.797 "core_count": 1 00:33:06.797 } 00:33:06.797 17:30:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.797 17:30:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62656 00:33:06.797 17:30:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62656 ']' 00:33:06.797 17:30:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62656 00:33:06.797 17:30:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:33:06.797 17:30:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:06.797 17:30:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62656 00:33:06.797 17:30:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:06.797 17:30:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:06.797 17:30:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62656' 00:33:06.797 killing process with pid 62656 00:33:06.797 17:30:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62656 00:33:06.797 [2024-11-26 17:30:07.235253] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:06.797 17:30:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62656 00:33:06.797 [2024-11-26 17:30:07.383441] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:08.177 17:30:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Lv8swoWVty 00:33:08.177 17:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:33:08.177 17:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:33:08.177 17:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:33:08.177 17:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:33:08.177 17:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:33:08.177 17:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:33:08.177 17:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:33:08.177 ************************************ 00:33:08.177 END TEST raid_read_error_test 00:33:08.177 ************************************ 00:33:08.177 00:33:08.177 real 0m4.511s 00:33:08.177 user 0m5.443s 00:33:08.177 sys 0m0.545s 00:33:08.177 17:30:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:08.177 17:30:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:08.177 17:30:08 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:33:08.177 17:30:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:33:08.177 17:30:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:08.177 17:30:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:08.177 ************************************ 00:33:08.177 START TEST raid_write_error_test 00:33:08.177 ************************************ 00:33:08.177 17:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:33:08.177 17:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 
00:33:08.177 17:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:33:08.177 17:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:33:08.177 17:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:33:08.177 17:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:33:08.177 17:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:33:08.177 17:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:33:08.177 17:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:33:08.177 17:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:33:08.177 17:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:33:08.177 17:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:33:08.177 17:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:33:08.177 17:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:33:08.177 17:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:33:08.177 17:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:33:08.177 17:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:33:08.177 17:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:33:08.177 17:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:33:08.177 17:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:33:08.177 17:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:33:08.177 
17:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:33:08.177 17:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:33:08.177 17:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.w2dH5K81FA 00:33:08.177 17:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62802 00:33:08.177 17:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:33:08.177 17:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62802 00:33:08.177 17:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62802 ']' 00:33:08.177 17:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:08.177 17:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:08.177 17:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:08.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:08.177 17:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:08.177 17:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:08.177 [2024-11-26 17:30:08.822939] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:33:08.177 [2024-11-26 17:30:08.823060] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62802 ] 00:33:08.437 [2024-11-26 17:30:09.000487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:08.437 [2024-11-26 17:30:09.121231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:08.715 [2024-11-26 17:30:09.328825] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:08.715 [2024-11-26 17:30:09.328864] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:09.289 17:30:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:09.289 17:30:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:33:09.289 17:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:33:09.289 17:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:33:09.289 17:30:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.289 17:30:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:09.289 BaseBdev1_malloc 00:33:09.289 17:30:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.289 17:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:33:09.289 17:30:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.289 17:30:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:09.289 true 00:33:09.289 17:30:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:33:09.289 17:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:33:09.289 17:30:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.289 17:30:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:09.289 [2024-11-26 17:30:09.736044] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:33:09.289 [2024-11-26 17:30:09.736149] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:09.289 [2024-11-26 17:30:09.736186] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:33:09.289 [2024-11-26 17:30:09.736218] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:09.289 [2024-11-26 17:30:09.738293] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:09.289 [2024-11-26 17:30:09.738370] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:09.289 BaseBdev1 00:33:09.289 17:30:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.289 17:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:33:09.289 17:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:33:09.289 17:30:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.289 17:30:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:09.289 BaseBdev2_malloc 00:33:09.289 17:30:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.289 17:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:33:09.289 17:30:09 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.289 17:30:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:09.289 true 00:33:09.289 17:30:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.289 17:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:33:09.289 17:30:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.289 17:30:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:09.289 [2024-11-26 17:30:09.804620] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:33:09.289 [2024-11-26 17:30:09.804718] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:09.289 [2024-11-26 17:30:09.804751] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:33:09.289 [2024-11-26 17:30:09.804781] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:09.289 [2024-11-26 17:30:09.806946] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:09.289 [2024-11-26 17:30:09.807016] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:33:09.289 BaseBdev2 00:33:09.289 17:30:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.289 17:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:33:09.290 17:30:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.290 17:30:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:09.290 [2024-11-26 17:30:09.816674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:33:09.290 [2024-11-26 17:30:09.818492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:09.290 [2024-11-26 17:30:09.818740] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:33:09.290 [2024-11-26 17:30:09.818787] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:33:09.290 [2024-11-26 17:30:09.819031] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:33:09.290 [2024-11-26 17:30:09.819241] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:33:09.290 [2024-11-26 17:30:09.819286] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:33:09.290 [2024-11-26 17:30:09.819482] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:09.290 17:30:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.290 17:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:33:09.290 17:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:09.290 17:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:09.290 17:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:33:09.290 17:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:09.290 17:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:09.290 17:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:09.290 17:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:09.290 17:30:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:09.290 17:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:09.290 17:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:09.290 17:30:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.290 17:30:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:09.290 17:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:09.290 17:30:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.290 17:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:09.290 "name": "raid_bdev1", 00:33:09.290 "uuid": "4fd810b6-af2b-4850-adc9-36b4c5182cfd", 00:33:09.290 "strip_size_kb": 64, 00:33:09.290 "state": "online", 00:33:09.290 "raid_level": "concat", 00:33:09.290 "superblock": true, 00:33:09.290 "num_base_bdevs": 2, 00:33:09.290 "num_base_bdevs_discovered": 2, 00:33:09.290 "num_base_bdevs_operational": 2, 00:33:09.290 "base_bdevs_list": [ 00:33:09.290 { 00:33:09.290 "name": "BaseBdev1", 00:33:09.290 "uuid": "47f6878f-10c2-5e86-9b55-f01872a1bc4e", 00:33:09.290 "is_configured": true, 00:33:09.290 "data_offset": 2048, 00:33:09.290 "data_size": 63488 00:33:09.290 }, 00:33:09.290 { 00:33:09.290 "name": "BaseBdev2", 00:33:09.290 "uuid": "4c807871-a67b-59df-8a49-4d4b456d1e37", 00:33:09.290 "is_configured": true, 00:33:09.290 "data_offset": 2048, 00:33:09.290 "data_size": 63488 00:33:09.290 } 00:33:09.290 ] 00:33:09.290 }' 00:33:09.290 17:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:09.290 17:30:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:09.861 17:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:33:09.861 17:30:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:33:09.861 [2024-11-26 17:30:10.341250] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:33:10.798 17:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:33:10.798 17:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.798 17:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:10.798 17:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.798 17:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:33:10.798 17:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:33:10.798 17:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:33:10.798 17:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:33:10.798 17:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:10.798 17:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:10.798 17:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:33:10.798 17:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:10.798 17:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:10.798 17:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:10.798 17:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:33:10.798 17:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:10.798 17:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:10.798 17:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:10.798 17:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:10.798 17:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.798 17:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:10.798 17:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.798 17:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:10.798 "name": "raid_bdev1", 00:33:10.798 "uuid": "4fd810b6-af2b-4850-adc9-36b4c5182cfd", 00:33:10.798 "strip_size_kb": 64, 00:33:10.798 "state": "online", 00:33:10.798 "raid_level": "concat", 00:33:10.798 "superblock": true, 00:33:10.798 "num_base_bdevs": 2, 00:33:10.798 "num_base_bdevs_discovered": 2, 00:33:10.798 "num_base_bdevs_operational": 2, 00:33:10.798 "base_bdevs_list": [ 00:33:10.798 { 00:33:10.798 "name": "BaseBdev1", 00:33:10.798 "uuid": "47f6878f-10c2-5e86-9b55-f01872a1bc4e", 00:33:10.798 "is_configured": true, 00:33:10.798 "data_offset": 2048, 00:33:10.798 "data_size": 63488 00:33:10.798 }, 00:33:10.798 { 00:33:10.798 "name": "BaseBdev2", 00:33:10.798 "uuid": "4c807871-a67b-59df-8a49-4d4b456d1e37", 00:33:10.798 "is_configured": true, 00:33:10.798 "data_offset": 2048, 00:33:10.798 "data_size": 63488 00:33:10.798 } 00:33:10.798 ] 00:33:10.798 }' 00:33:10.798 17:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:10.798 17:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:11.059 17:30:11 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:11.059 17:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.059 17:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:11.059 [2024-11-26 17:30:11.734487] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:11.059 [2024-11-26 17:30:11.734607] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:11.059 [2024-11-26 17:30:11.737718] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:11.059 [2024-11-26 17:30:11.737808] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:11.059 [2024-11-26 17:30:11.737859] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:11.059 [2024-11-26 17:30:11.737904] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:33:11.059 17:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.059 17:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62802 00:33:11.059 17:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62802 ']' 00:33:11.059 17:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62802 00:33:11.059 { 00:33:11.059 "results": [ 00:33:11.059 { 00:33:11.059 "job": "raid_bdev1", 00:33:11.059 "core_mask": "0x1", 00:33:11.059 "workload": "randrw", 00:33:11.059 "percentage": 50, 00:33:11.059 "status": "finished", 00:33:11.059 "queue_depth": 1, 00:33:11.059 "io_size": 131072, 00:33:11.059 "runtime": 1.393396, 00:33:11.059 "iops": 14435.235927187963, 00:33:11.059 "mibps": 1804.4044908984954, 00:33:11.059 "io_failed": 1, 00:33:11.059 "io_timeout": 0, 00:33:11.059 "avg_latency_us": 95.6765817510016, 00:33:11.059 
"min_latency_us": 27.053275109170304, 00:33:11.059 "max_latency_us": 1495.3082969432314 00:33:11.059 } 00:33:11.059 ], 00:33:11.059 "core_count": 1 00:33:11.059 } 00:33:11.059 17:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:33:11.059 17:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:11.059 17:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62802 00:33:11.319 killing process with pid 62802 00:33:11.319 17:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:11.319 17:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:11.319 17:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62802' 00:33:11.319 17:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62802 00:33:11.319 [2024-11-26 17:30:11.781791] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:11.319 17:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62802 00:33:11.319 [2024-11-26 17:30:11.928929] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:12.700 17:30:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.w2dH5K81FA 00:33:12.700 17:30:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:33:12.700 17:30:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:33:12.700 17:30:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:33:12.700 17:30:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:33:12.700 17:30:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:33:12.700 17:30:13 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:33:12.700 17:30:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:33:12.700 00:33:12.700 real 0m4.492s 00:33:12.700 user 0m5.381s 00:33:12.700 sys 0m0.553s 00:33:12.700 17:30:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:12.700 ************************************ 00:33:12.700 END TEST raid_write_error_test 00:33:12.700 ************************************ 00:33:12.700 17:30:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:12.700 17:30:13 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:33:12.700 17:30:13 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:33:12.700 17:30:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:33:12.700 17:30:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:12.700 17:30:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:12.700 ************************************ 00:33:12.700 START TEST raid_state_function_test 00:33:12.700 ************************************ 00:33:12.701 17:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:33:12.701 17:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:33:12.701 17:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:33:12.701 17:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:33:12.701 17:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:33:12.701 17:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:33:12.701 17:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:33:12.701 17:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:33:12.701 17:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:33:12.701 17:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:12.701 17:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:33:12.701 17:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:33:12.701 17:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:12.701 17:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:33:12.701 17:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:33:12.701 17:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:33:12.701 17:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:33:12.701 17:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:33:12.701 17:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:33:12.701 17:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:33:12.701 17:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:33:12.701 17:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:33:12.701 17:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:33:12.701 17:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62940 00:33:12.701 17:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 
00:33:12.701 17:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62940' 00:33:12.701 Process raid pid: 62940 00:33:12.701 17:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62940 00:33:12.701 17:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62940 ']' 00:33:12.701 17:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:12.701 17:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:12.701 17:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:12.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:12.701 17:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:12.701 17:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:12.701 [2024-11-26 17:30:13.378672] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:33:12.701 [2024-11-26 17:30:13.378797] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:12.960 [2024-11-26 17:30:13.560249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:13.220 [2024-11-26 17:30:13.677082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:13.220 [2024-11-26 17:30:13.892943] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:13.220 [2024-11-26 17:30:13.892992] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:13.790 17:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:13.790 17:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:33:13.790 17:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:33:13.790 17:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.790 17:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:13.790 [2024-11-26 17:30:14.255199] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:13.790 [2024-11-26 17:30:14.255379] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:13.790 [2024-11-26 17:30:14.255439] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:13.790 [2024-11-26 17:30:14.255486] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:13.790 17:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.790 17:30:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:33:13.790 17:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:13.790 17:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:13.790 17:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:13.790 17:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:13.790 17:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:13.790 17:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:13.790 17:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:13.790 17:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:13.790 17:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:13.790 17:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:13.790 17:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:13.790 17:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.790 17:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:13.790 17:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.790 17:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:13.790 "name": "Existed_Raid", 00:33:13.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:13.790 "strip_size_kb": 0, 00:33:13.790 "state": "configuring", 00:33:13.790 
"raid_level": "raid1", 00:33:13.790 "superblock": false, 00:33:13.790 "num_base_bdevs": 2, 00:33:13.790 "num_base_bdevs_discovered": 0, 00:33:13.790 "num_base_bdevs_operational": 2, 00:33:13.790 "base_bdevs_list": [ 00:33:13.790 { 00:33:13.790 "name": "BaseBdev1", 00:33:13.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:13.790 "is_configured": false, 00:33:13.790 "data_offset": 0, 00:33:13.790 "data_size": 0 00:33:13.790 }, 00:33:13.790 { 00:33:13.790 "name": "BaseBdev2", 00:33:13.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:13.790 "is_configured": false, 00:33:13.790 "data_offset": 0, 00:33:13.790 "data_size": 0 00:33:13.790 } 00:33:13.790 ] 00:33:13.790 }' 00:33:13.790 17:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:13.790 17:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:14.050 17:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:33:14.050 17:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.050 17:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:14.050 [2024-11-26 17:30:14.702388] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:14.050 [2024-11-26 17:30:14.702499] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:33:14.050 17:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.050 17:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:33:14.050 17:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.050 17:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:33:14.050 [2024-11-26 17:30:14.714365] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:14.050 [2024-11-26 17:30:14.714479] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:14.050 [2024-11-26 17:30:14.714524] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:14.050 [2024-11-26 17:30:14.714555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:14.050 17:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.050 17:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:33:14.050 17:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.050 17:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:14.310 [2024-11-26 17:30:14.763997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:14.310 BaseBdev1 00:33:14.310 17:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.311 17:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:33:14.311 17:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:33:14.311 17:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:14.311 17:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:33:14.311 17:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:14.311 17:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:14.311 17:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:33:14.311 17:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.311 17:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:14.311 17:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.311 17:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:33:14.311 17:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.311 17:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:14.311 [ 00:33:14.311 { 00:33:14.311 "name": "BaseBdev1", 00:33:14.311 "aliases": [ 00:33:14.311 "44a51d98-b92a-49ab-bb7a-52190d1b87b5" 00:33:14.311 ], 00:33:14.311 "product_name": "Malloc disk", 00:33:14.311 "block_size": 512, 00:33:14.311 "num_blocks": 65536, 00:33:14.311 "uuid": "44a51d98-b92a-49ab-bb7a-52190d1b87b5", 00:33:14.311 "assigned_rate_limits": { 00:33:14.311 "rw_ios_per_sec": 0, 00:33:14.311 "rw_mbytes_per_sec": 0, 00:33:14.311 "r_mbytes_per_sec": 0, 00:33:14.311 "w_mbytes_per_sec": 0 00:33:14.311 }, 00:33:14.311 "claimed": true, 00:33:14.311 "claim_type": "exclusive_write", 00:33:14.311 "zoned": false, 00:33:14.311 "supported_io_types": { 00:33:14.311 "read": true, 00:33:14.311 "write": true, 00:33:14.311 "unmap": true, 00:33:14.311 "flush": true, 00:33:14.311 "reset": true, 00:33:14.311 "nvme_admin": false, 00:33:14.311 "nvme_io": false, 00:33:14.311 "nvme_io_md": false, 00:33:14.311 "write_zeroes": true, 00:33:14.311 "zcopy": true, 00:33:14.311 "get_zone_info": false, 00:33:14.311 "zone_management": false, 00:33:14.311 "zone_append": false, 00:33:14.311 "compare": false, 00:33:14.311 "compare_and_write": false, 00:33:14.311 "abort": true, 00:33:14.311 "seek_hole": false, 00:33:14.311 "seek_data": false, 00:33:14.311 "copy": true, 00:33:14.311 "nvme_iov_md": 
false 00:33:14.311 }, 00:33:14.311 "memory_domains": [ 00:33:14.311 { 00:33:14.311 "dma_device_id": "system", 00:33:14.311 "dma_device_type": 1 00:33:14.311 }, 00:33:14.311 { 00:33:14.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:14.311 "dma_device_type": 2 00:33:14.311 } 00:33:14.311 ], 00:33:14.311 "driver_specific": {} 00:33:14.311 } 00:33:14.311 ] 00:33:14.311 17:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.311 17:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:33:14.311 17:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:33:14.311 17:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:14.311 17:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:14.311 17:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:14.311 17:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:14.311 17:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:14.311 17:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:14.311 17:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:14.311 17:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:14.311 17:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:14.311 17:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:14.311 17:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:14.311 
17:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.311 17:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:14.311 17:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.311 17:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:14.311 "name": "Existed_Raid", 00:33:14.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:14.311 "strip_size_kb": 0, 00:33:14.311 "state": "configuring", 00:33:14.311 "raid_level": "raid1", 00:33:14.311 "superblock": false, 00:33:14.311 "num_base_bdevs": 2, 00:33:14.311 "num_base_bdevs_discovered": 1, 00:33:14.311 "num_base_bdevs_operational": 2, 00:33:14.311 "base_bdevs_list": [ 00:33:14.311 { 00:33:14.311 "name": "BaseBdev1", 00:33:14.311 "uuid": "44a51d98-b92a-49ab-bb7a-52190d1b87b5", 00:33:14.311 "is_configured": true, 00:33:14.311 "data_offset": 0, 00:33:14.311 "data_size": 65536 00:33:14.311 }, 00:33:14.311 { 00:33:14.311 "name": "BaseBdev2", 00:33:14.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:14.311 "is_configured": false, 00:33:14.311 "data_offset": 0, 00:33:14.311 "data_size": 0 00:33:14.311 } 00:33:14.311 ] 00:33:14.311 }' 00:33:14.311 17:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:14.311 17:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:14.882 17:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:33:14.882 17:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.882 17:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:14.882 [2024-11-26 17:30:15.271702] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:14.882 [2024-11-26 17:30:15.271821] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:33:14.882 17:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.882 17:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:33:14.882 17:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.882 17:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:14.882 [2024-11-26 17:30:15.283797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:14.882 [2024-11-26 17:30:15.285908] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:14.882 [2024-11-26 17:30:15.285968] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:14.882 17:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.882 17:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:33:14.882 17:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:33:14.882 17:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:33:14.882 17:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:14.882 17:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:14.882 17:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:14.882 17:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:14.882 17:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:33:14.882 17:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:14.882 17:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:14.882 17:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:14.882 17:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:14.882 17:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:14.882 17:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:14.882 17:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.882 17:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:14.882 17:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.882 17:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:14.882 "name": "Existed_Raid", 00:33:14.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:14.882 "strip_size_kb": 0, 00:33:14.882 "state": "configuring", 00:33:14.882 "raid_level": "raid1", 00:33:14.882 "superblock": false, 00:33:14.882 "num_base_bdevs": 2, 00:33:14.882 "num_base_bdevs_discovered": 1, 00:33:14.882 "num_base_bdevs_operational": 2, 00:33:14.882 "base_bdevs_list": [ 00:33:14.882 { 00:33:14.882 "name": "BaseBdev1", 00:33:14.882 "uuid": "44a51d98-b92a-49ab-bb7a-52190d1b87b5", 00:33:14.882 "is_configured": true, 00:33:14.882 "data_offset": 0, 00:33:14.882 "data_size": 65536 00:33:14.882 }, 00:33:14.882 { 00:33:14.882 "name": "BaseBdev2", 00:33:14.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:14.882 "is_configured": false, 00:33:14.882 "data_offset": 0, 00:33:14.882 "data_size": 0 00:33:14.882 } 00:33:14.882 ] 
00:33:14.882 }' 00:33:14.882 17:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:14.882 17:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:15.141 17:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:33:15.141 17:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.141 17:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:15.141 [2024-11-26 17:30:15.818448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:15.141 [2024-11-26 17:30:15.818656] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:33:15.141 [2024-11-26 17:30:15.818689] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:33:15.141 [2024-11-26 17:30:15.819027] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:33:15.141 [2024-11-26 17:30:15.819293] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:33:15.141 [2024-11-26 17:30:15.819349] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:33:15.141 [2024-11-26 17:30:15.819729] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:15.141 BaseBdev2 00:33:15.141 17:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.141 17:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:33:15.141 17:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:33:15.141 17:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:15.141 17:30:15 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@905 -- # local i 00:33:15.141 17:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:15.141 17:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:15.141 17:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:15.141 17:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.141 17:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:15.401 17:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.401 17:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:33:15.401 17:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.401 17:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:15.401 [ 00:33:15.401 { 00:33:15.401 "name": "BaseBdev2", 00:33:15.401 "aliases": [ 00:33:15.401 "22339d19-0b1e-4d4f-8b5d-e5db060d5df9" 00:33:15.401 ], 00:33:15.401 "product_name": "Malloc disk", 00:33:15.401 "block_size": 512, 00:33:15.401 "num_blocks": 65536, 00:33:15.401 "uuid": "22339d19-0b1e-4d4f-8b5d-e5db060d5df9", 00:33:15.401 "assigned_rate_limits": { 00:33:15.401 "rw_ios_per_sec": 0, 00:33:15.401 "rw_mbytes_per_sec": 0, 00:33:15.401 "r_mbytes_per_sec": 0, 00:33:15.401 "w_mbytes_per_sec": 0 00:33:15.401 }, 00:33:15.401 "claimed": true, 00:33:15.401 "claim_type": "exclusive_write", 00:33:15.401 "zoned": false, 00:33:15.401 "supported_io_types": { 00:33:15.401 "read": true, 00:33:15.401 "write": true, 00:33:15.401 "unmap": true, 00:33:15.401 "flush": true, 00:33:15.401 "reset": true, 00:33:15.401 "nvme_admin": false, 00:33:15.401 "nvme_io": false, 00:33:15.401 "nvme_io_md": false, 00:33:15.401 "write_zeroes": 
true, 00:33:15.401 "zcopy": true, 00:33:15.401 "get_zone_info": false, 00:33:15.401 "zone_management": false, 00:33:15.401 "zone_append": false, 00:33:15.401 "compare": false, 00:33:15.401 "compare_and_write": false, 00:33:15.401 "abort": true, 00:33:15.401 "seek_hole": false, 00:33:15.401 "seek_data": false, 00:33:15.401 "copy": true, 00:33:15.401 "nvme_iov_md": false 00:33:15.401 }, 00:33:15.401 "memory_domains": [ 00:33:15.401 { 00:33:15.401 "dma_device_id": "system", 00:33:15.401 "dma_device_type": 1 00:33:15.401 }, 00:33:15.401 { 00:33:15.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:15.401 "dma_device_type": 2 00:33:15.401 } 00:33:15.401 ], 00:33:15.401 "driver_specific": {} 00:33:15.401 } 00:33:15.401 ] 00:33:15.401 17:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.401 17:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:33:15.401 17:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:33:15.401 17:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:33:15.401 17:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:33:15.401 17:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:15.401 17:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:15.401 17:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:15.401 17:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:15.401 17:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:15.401 17:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:15.401 17:30:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:15.401 17:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:15.401 17:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:15.401 17:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:15.401 17:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:15.401 17:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.401 17:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:15.401 17:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.401 17:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:15.401 "name": "Existed_Raid", 00:33:15.401 "uuid": "0ccc1ec7-17f0-434f-b119-f816e38cd54e", 00:33:15.401 "strip_size_kb": 0, 00:33:15.401 "state": "online", 00:33:15.401 "raid_level": "raid1", 00:33:15.401 "superblock": false, 00:33:15.401 "num_base_bdevs": 2, 00:33:15.401 "num_base_bdevs_discovered": 2, 00:33:15.401 "num_base_bdevs_operational": 2, 00:33:15.401 "base_bdevs_list": [ 00:33:15.401 { 00:33:15.401 "name": "BaseBdev1", 00:33:15.401 "uuid": "44a51d98-b92a-49ab-bb7a-52190d1b87b5", 00:33:15.401 "is_configured": true, 00:33:15.401 "data_offset": 0, 00:33:15.401 "data_size": 65536 00:33:15.401 }, 00:33:15.401 { 00:33:15.401 "name": "BaseBdev2", 00:33:15.401 "uuid": "22339d19-0b1e-4d4f-8b5d-e5db060d5df9", 00:33:15.401 "is_configured": true, 00:33:15.401 "data_offset": 0, 00:33:15.401 "data_size": 65536 00:33:15.401 } 00:33:15.401 ] 00:33:15.401 }' 00:33:15.401 17:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:15.401 17:30:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:15.661 17:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:33:15.661 17:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:33:15.661 17:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:33:15.661 17:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:33:15.661 17:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:33:15.661 17:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:33:15.661 17:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:33:15.661 17:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:33:15.661 17:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.661 17:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:15.661 [2024-11-26 17:30:16.313994] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:15.661 17:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.661 17:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:15.661 "name": "Existed_Raid", 00:33:15.661 "aliases": [ 00:33:15.661 "0ccc1ec7-17f0-434f-b119-f816e38cd54e" 00:33:15.661 ], 00:33:15.661 "product_name": "Raid Volume", 00:33:15.661 "block_size": 512, 00:33:15.661 "num_blocks": 65536, 00:33:15.661 "uuid": "0ccc1ec7-17f0-434f-b119-f816e38cd54e", 00:33:15.661 "assigned_rate_limits": { 00:33:15.661 "rw_ios_per_sec": 0, 00:33:15.661 "rw_mbytes_per_sec": 0, 00:33:15.661 "r_mbytes_per_sec": 0, 00:33:15.661 
"w_mbytes_per_sec": 0 00:33:15.661 }, 00:33:15.661 "claimed": false, 00:33:15.661 "zoned": false, 00:33:15.661 "supported_io_types": { 00:33:15.661 "read": true, 00:33:15.661 "write": true, 00:33:15.661 "unmap": false, 00:33:15.661 "flush": false, 00:33:15.661 "reset": true, 00:33:15.661 "nvme_admin": false, 00:33:15.661 "nvme_io": false, 00:33:15.661 "nvme_io_md": false, 00:33:15.661 "write_zeroes": true, 00:33:15.661 "zcopy": false, 00:33:15.661 "get_zone_info": false, 00:33:15.661 "zone_management": false, 00:33:15.661 "zone_append": false, 00:33:15.661 "compare": false, 00:33:15.661 "compare_and_write": false, 00:33:15.661 "abort": false, 00:33:15.661 "seek_hole": false, 00:33:15.661 "seek_data": false, 00:33:15.661 "copy": false, 00:33:15.661 "nvme_iov_md": false 00:33:15.661 }, 00:33:15.661 "memory_domains": [ 00:33:15.661 { 00:33:15.661 "dma_device_id": "system", 00:33:15.661 "dma_device_type": 1 00:33:15.661 }, 00:33:15.661 { 00:33:15.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:15.661 "dma_device_type": 2 00:33:15.661 }, 00:33:15.661 { 00:33:15.661 "dma_device_id": "system", 00:33:15.661 "dma_device_type": 1 00:33:15.661 }, 00:33:15.661 { 00:33:15.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:15.661 "dma_device_type": 2 00:33:15.661 } 00:33:15.661 ], 00:33:15.661 "driver_specific": { 00:33:15.661 "raid": { 00:33:15.661 "uuid": "0ccc1ec7-17f0-434f-b119-f816e38cd54e", 00:33:15.661 "strip_size_kb": 0, 00:33:15.661 "state": "online", 00:33:15.661 "raid_level": "raid1", 00:33:15.662 "superblock": false, 00:33:15.662 "num_base_bdevs": 2, 00:33:15.662 "num_base_bdevs_discovered": 2, 00:33:15.662 "num_base_bdevs_operational": 2, 00:33:15.662 "base_bdevs_list": [ 00:33:15.662 { 00:33:15.662 "name": "BaseBdev1", 00:33:15.662 "uuid": "44a51d98-b92a-49ab-bb7a-52190d1b87b5", 00:33:15.662 "is_configured": true, 00:33:15.662 "data_offset": 0, 00:33:15.662 "data_size": 65536 00:33:15.662 }, 00:33:15.662 { 00:33:15.662 "name": "BaseBdev2", 00:33:15.662 "uuid": 
"22339d19-0b1e-4d4f-8b5d-e5db060d5df9", 00:33:15.662 "is_configured": true, 00:33:15.662 "data_offset": 0, 00:33:15.662 "data_size": 65536 00:33:15.662 } 00:33:15.662 ] 00:33:15.662 } 00:33:15.662 } 00:33:15.662 }' 00:33:15.922 17:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:15.922 17:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:33:15.922 BaseBdev2' 00:33:15.922 17:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:15.922 17:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:33:15.922 17:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:15.922 17:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:33:15.922 17:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.922 17:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:15.922 17:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:15.922 17:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.922 17:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:15.922 17:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:15.922 17:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:15.922 17:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:33:15.922 17:30:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:15.922 17:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.922 17:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:15.922 17:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.922 17:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:15.922 17:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:15.922 17:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:33:15.922 17:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.922 17:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:15.922 [2024-11-26 17:30:16.549267] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:16.188 17:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.188 17:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:33:16.188 17:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:33:16.188 17:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:33:16.188 17:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:33:16.188 17:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:33:16.188 17:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:33:16.188 17:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:33:16.188 17:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:16.188 17:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:16.188 17:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:16.188 17:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:16.188 17:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:16.188 17:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:16.188 17:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:16.188 17:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:16.188 17:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:16.188 17:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:16.188 17:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.188 17:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:16.188 17:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.188 17:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:16.188 "name": "Existed_Raid", 00:33:16.188 "uuid": "0ccc1ec7-17f0-434f-b119-f816e38cd54e", 00:33:16.188 "strip_size_kb": 0, 00:33:16.188 "state": "online", 00:33:16.188 "raid_level": "raid1", 00:33:16.188 "superblock": false, 00:33:16.188 "num_base_bdevs": 2, 00:33:16.188 "num_base_bdevs_discovered": 1, 00:33:16.188 "num_base_bdevs_operational": 1, 00:33:16.188 "base_bdevs_list": [ 00:33:16.188 { 
00:33:16.188 "name": null, 00:33:16.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:16.188 "is_configured": false, 00:33:16.188 "data_offset": 0, 00:33:16.188 "data_size": 65536 00:33:16.188 }, 00:33:16.188 { 00:33:16.188 "name": "BaseBdev2", 00:33:16.188 "uuid": "22339d19-0b1e-4d4f-8b5d-e5db060d5df9", 00:33:16.188 "is_configured": true, 00:33:16.188 "data_offset": 0, 00:33:16.188 "data_size": 65536 00:33:16.188 } 00:33:16.188 ] 00:33:16.188 }' 00:33:16.188 17:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:16.188 17:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:16.770 17:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:33:16.770 17:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:33:16.770 17:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:16.770 17:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:33:16.770 17:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.770 17:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:16.770 17:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.770 17:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:33:16.770 17:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:16.770 17:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:33:16.770 17:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.770 17:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:33:16.770 [2024-11-26 17:30:17.215712] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:16.770 [2024-11-26 17:30:17.215925] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:16.770 [2024-11-26 17:30:17.311797] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:16.770 [2024-11-26 17:30:17.311955] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:16.770 [2024-11-26 17:30:17.311975] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:33:16.770 17:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.770 17:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:33:16.770 17:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:33:16.770 17:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:16.770 17:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.770 17:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:33:16.770 17:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:16.770 17:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.770 17:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:33:16.770 17:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:33:16.770 17:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:33:16.770 17:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62940 00:33:16.770 17:30:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62940 ']' 00:33:16.770 17:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62940 00:33:16.770 17:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:33:16.770 17:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:16.770 17:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62940 00:33:16.770 17:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:16.770 17:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:16.770 17:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62940' 00:33:16.770 killing process with pid 62940 00:33:16.770 17:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62940 00:33:16.770 [2024-11-26 17:30:17.410855] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:16.770 17:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62940 00:33:16.770 [2024-11-26 17:30:17.429338] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:18.153 ************************************ 00:33:18.153 END TEST raid_state_function_test 00:33:18.153 ************************************ 00:33:18.153 17:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:33:18.153 00:33:18.153 real 0m5.323s 00:33:18.153 user 0m7.734s 00:33:18.153 sys 0m0.859s 00:33:18.153 17:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:18.153 17:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:18.153 17:30:18 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:33:18.153 17:30:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:33:18.153 17:30:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:18.153 17:30:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:18.153 ************************************ 00:33:18.153 START TEST raid_state_function_test_sb 00:33:18.153 ************************************ 00:33:18.153 17:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:33:18.153 17:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:33:18.153 17:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:33:18.153 17:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:33:18.153 17:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:33:18.153 17:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:33:18.153 17:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:18.153 17:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:33:18.153 17:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:33:18.153 17:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:18.153 17:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:33:18.153 17:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:33:18.153 17:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:18.153 17:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:33:18.153 17:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:33:18.153 17:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:33:18.153 17:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:33:18.153 17:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:33:18.153 17:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:33:18.153 17:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:33:18.153 17:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:33:18.153 17:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:33:18.153 17:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:33:18.153 17:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63193 00:33:18.153 17:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:33:18.153 17:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63193' 00:33:18.153 Process raid pid: 63193 00:33:18.153 17:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63193 00:33:18.153 17:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 63193 ']' 00:33:18.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:18.153 17:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:18.153 17:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:18.153 17:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:18.153 17:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:18.153 17:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:18.154 [2024-11-26 17:30:18.770766] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:33:18.154 [2024-11-26 17:30:18.770981] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:18.413 [2024-11-26 17:30:18.948881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:18.413 [2024-11-26 17:30:19.073007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:18.683 [2024-11-26 17:30:19.289352] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:18.683 [2024-11-26 17:30:19.289497] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:19.253 17:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:19.253 17:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:33:19.253 17:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:33:19.253 17:30:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.253 17:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:19.253 [2024-11-26 17:30:19.645284] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:19.253 [2024-11-26 17:30:19.645351] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:19.253 [2024-11-26 17:30:19.645363] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:19.253 [2024-11-26 17:30:19.645392] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:19.253 17:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.253 17:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:33:19.253 17:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:19.253 17:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:19.253 17:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:19.253 17:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:19.253 17:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:19.253 17:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:19.253 17:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:19.253 17:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:19.253 17:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:19.253 17:30:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:19.253 17:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.253 17:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:19.253 17:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:19.253 17:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.253 17:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:19.253 "name": "Existed_Raid", 00:33:19.253 "uuid": "b7b7edd3-856b-4dc4-b1fb-49c71be8f09d", 00:33:19.253 "strip_size_kb": 0, 00:33:19.253 "state": "configuring", 00:33:19.253 "raid_level": "raid1", 00:33:19.253 "superblock": true, 00:33:19.253 "num_base_bdevs": 2, 00:33:19.253 "num_base_bdevs_discovered": 0, 00:33:19.253 "num_base_bdevs_operational": 2, 00:33:19.253 "base_bdevs_list": [ 00:33:19.253 { 00:33:19.253 "name": "BaseBdev1", 00:33:19.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:19.253 "is_configured": false, 00:33:19.253 "data_offset": 0, 00:33:19.253 "data_size": 0 00:33:19.253 }, 00:33:19.253 { 00:33:19.253 "name": "BaseBdev2", 00:33:19.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:19.253 "is_configured": false, 00:33:19.253 "data_offset": 0, 00:33:19.253 "data_size": 0 00:33:19.253 } 00:33:19.253 ] 00:33:19.253 }' 00:33:19.253 17:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:19.253 17:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:19.581 17:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:33:19.581 17:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:33:19.581 17:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:19.581 [2024-11-26 17:30:20.072536] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:19.581 [2024-11-26 17:30:20.072647] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:33:19.581 17:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.581 17:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:33:19.581 17:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.581 17:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:19.581 [2024-11-26 17:30:20.084497] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:19.581 [2024-11-26 17:30:20.084602] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:19.581 [2024-11-26 17:30:20.084634] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:19.581 [2024-11-26 17:30:20.084663] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:19.581 17:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.581 17:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:33:19.581 17:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.581 17:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:19.581 [2024-11-26 17:30:20.133211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:33:19.581 BaseBdev1 00:33:19.581 17:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.581 17:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:33:19.581 17:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:33:19.581 17:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:19.581 17:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:33:19.581 17:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:19.581 17:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:19.581 17:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:19.581 17:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.581 17:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:19.581 17:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.581 17:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:33:19.581 17:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.581 17:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:19.581 [ 00:33:19.581 { 00:33:19.581 "name": "BaseBdev1", 00:33:19.581 "aliases": [ 00:33:19.581 "6ea50196-4b9d-4a7d-8974-c84da86de395" 00:33:19.581 ], 00:33:19.581 "product_name": "Malloc disk", 00:33:19.581 "block_size": 512, 00:33:19.581 "num_blocks": 65536, 00:33:19.581 "uuid": "6ea50196-4b9d-4a7d-8974-c84da86de395", 00:33:19.581 
"assigned_rate_limits": { 00:33:19.581 "rw_ios_per_sec": 0, 00:33:19.581 "rw_mbytes_per_sec": 0, 00:33:19.581 "r_mbytes_per_sec": 0, 00:33:19.581 "w_mbytes_per_sec": 0 00:33:19.581 }, 00:33:19.581 "claimed": true, 00:33:19.581 "claim_type": "exclusive_write", 00:33:19.581 "zoned": false, 00:33:19.581 "supported_io_types": { 00:33:19.581 "read": true, 00:33:19.581 "write": true, 00:33:19.581 "unmap": true, 00:33:19.581 "flush": true, 00:33:19.581 "reset": true, 00:33:19.581 "nvme_admin": false, 00:33:19.581 "nvme_io": false, 00:33:19.581 "nvme_io_md": false, 00:33:19.581 "write_zeroes": true, 00:33:19.581 "zcopy": true, 00:33:19.581 "get_zone_info": false, 00:33:19.581 "zone_management": false, 00:33:19.581 "zone_append": false, 00:33:19.581 "compare": false, 00:33:19.581 "compare_and_write": false, 00:33:19.581 "abort": true, 00:33:19.581 "seek_hole": false, 00:33:19.581 "seek_data": false, 00:33:19.581 "copy": true, 00:33:19.581 "nvme_iov_md": false 00:33:19.581 }, 00:33:19.581 "memory_domains": [ 00:33:19.581 { 00:33:19.581 "dma_device_id": "system", 00:33:19.581 "dma_device_type": 1 00:33:19.581 }, 00:33:19.581 { 00:33:19.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:19.581 "dma_device_type": 2 00:33:19.581 } 00:33:19.581 ], 00:33:19.581 "driver_specific": {} 00:33:19.581 } 00:33:19.581 ] 00:33:19.581 17:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.581 17:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:33:19.581 17:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:33:19.581 17:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:19.581 17:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:19.581 17:30:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:19.581 17:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:19.581 17:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:19.582 17:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:19.582 17:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:19.582 17:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:19.582 17:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:19.582 17:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:19.582 17:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:19.582 17:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.582 17:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:19.582 17:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.582 17:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:19.582 "name": "Existed_Raid", 00:33:19.582 "uuid": "56504148-dac6-47a3-8725-31dcd39fd23a", 00:33:19.582 "strip_size_kb": 0, 00:33:19.582 "state": "configuring", 00:33:19.582 "raid_level": "raid1", 00:33:19.582 "superblock": true, 00:33:19.582 "num_base_bdevs": 2, 00:33:19.582 "num_base_bdevs_discovered": 1, 00:33:19.582 "num_base_bdevs_operational": 2, 00:33:19.582 "base_bdevs_list": [ 00:33:19.582 { 00:33:19.582 "name": "BaseBdev1", 00:33:19.582 "uuid": "6ea50196-4b9d-4a7d-8974-c84da86de395", 00:33:19.582 "is_configured": true, 00:33:19.582 "data_offset": 2048, 
00:33:19.582 "data_size": 63488 00:33:19.582 }, 00:33:19.582 { 00:33:19.582 "name": "BaseBdev2", 00:33:19.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:19.582 "is_configured": false, 00:33:19.582 "data_offset": 0, 00:33:19.582 "data_size": 0 00:33:19.582 } 00:33:19.582 ] 00:33:19.582 }' 00:33:19.582 17:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:19.582 17:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:20.151 17:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:33:20.151 17:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.151 17:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:20.151 [2024-11-26 17:30:20.608482] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:20.151 [2024-11-26 17:30:20.608568] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:33:20.151 17:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.151 17:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:33:20.151 17:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.151 17:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:20.151 [2024-11-26 17:30:20.616504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:20.151 [2024-11-26 17:30:20.618640] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:20.151 [2024-11-26 17:30:20.618690] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:33:20.151 17:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.151 17:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:33:20.151 17:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:33:20.151 17:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:33:20.151 17:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:20.151 17:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:20.151 17:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:20.151 17:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:20.151 17:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:20.151 17:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:20.151 17:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:20.151 17:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:20.151 17:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:20.151 17:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:20.151 17:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.151 17:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:20.151 17:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:33:20.151 17:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.151 17:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:20.151 "name": "Existed_Raid", 00:33:20.151 "uuid": "b546156b-98fb-4034-a198-2f4e1c7c86cc", 00:33:20.151 "strip_size_kb": 0, 00:33:20.151 "state": "configuring", 00:33:20.151 "raid_level": "raid1", 00:33:20.151 "superblock": true, 00:33:20.151 "num_base_bdevs": 2, 00:33:20.151 "num_base_bdevs_discovered": 1, 00:33:20.151 "num_base_bdevs_operational": 2, 00:33:20.151 "base_bdevs_list": [ 00:33:20.151 { 00:33:20.151 "name": "BaseBdev1", 00:33:20.151 "uuid": "6ea50196-4b9d-4a7d-8974-c84da86de395", 00:33:20.151 "is_configured": true, 00:33:20.151 "data_offset": 2048, 00:33:20.151 "data_size": 63488 00:33:20.151 }, 00:33:20.151 { 00:33:20.151 "name": "BaseBdev2", 00:33:20.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:20.151 "is_configured": false, 00:33:20.151 "data_offset": 0, 00:33:20.151 "data_size": 0 00:33:20.151 } 00:33:20.151 ] 00:33:20.151 }' 00:33:20.151 17:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:20.151 17:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:20.412 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:33:20.412 17:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.412 17:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:20.672 [2024-11-26 17:30:21.131084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:20.672 [2024-11-26 17:30:21.131363] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:33:20.672 [2024-11-26 17:30:21.131380] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:33:20.672 [2024-11-26 17:30:21.131732] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:33:20.672 [2024-11-26 17:30:21.131914] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:33:20.672 [2024-11-26 17:30:21.131932] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:33:20.672 BaseBdev2 00:33:20.672 [2024-11-26 17:30:21.132096] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:20.672 17:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.672 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:33:20.672 17:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:33:20.672 17:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:20.672 17:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:33:20.672 17:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:20.672 17:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:20.672 17:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:20.672 17:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.672 17:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:20.672 17:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.672 17:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev2 -t 2000 00:33:20.672 17:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.672 17:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:20.672 [ 00:33:20.672 { 00:33:20.672 "name": "BaseBdev2", 00:33:20.672 "aliases": [ 00:33:20.672 "33598d25-8c79-48ae-8215-02b29ea936a1" 00:33:20.673 ], 00:33:20.673 "product_name": "Malloc disk", 00:33:20.673 "block_size": 512, 00:33:20.673 "num_blocks": 65536, 00:33:20.673 "uuid": "33598d25-8c79-48ae-8215-02b29ea936a1", 00:33:20.673 "assigned_rate_limits": { 00:33:20.673 "rw_ios_per_sec": 0, 00:33:20.673 "rw_mbytes_per_sec": 0, 00:33:20.673 "r_mbytes_per_sec": 0, 00:33:20.673 "w_mbytes_per_sec": 0 00:33:20.673 }, 00:33:20.673 "claimed": true, 00:33:20.673 "claim_type": "exclusive_write", 00:33:20.673 "zoned": false, 00:33:20.673 "supported_io_types": { 00:33:20.673 "read": true, 00:33:20.673 "write": true, 00:33:20.673 "unmap": true, 00:33:20.673 "flush": true, 00:33:20.673 "reset": true, 00:33:20.673 "nvme_admin": false, 00:33:20.673 "nvme_io": false, 00:33:20.673 "nvme_io_md": false, 00:33:20.673 "write_zeroes": true, 00:33:20.673 "zcopy": true, 00:33:20.673 "get_zone_info": false, 00:33:20.673 "zone_management": false, 00:33:20.673 "zone_append": false, 00:33:20.673 "compare": false, 00:33:20.673 "compare_and_write": false, 00:33:20.673 "abort": true, 00:33:20.673 "seek_hole": false, 00:33:20.673 "seek_data": false, 00:33:20.673 "copy": true, 00:33:20.673 "nvme_iov_md": false 00:33:20.673 }, 00:33:20.673 "memory_domains": [ 00:33:20.673 { 00:33:20.673 "dma_device_id": "system", 00:33:20.673 "dma_device_type": 1 00:33:20.673 }, 00:33:20.673 { 00:33:20.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:20.673 "dma_device_type": 2 00:33:20.673 } 00:33:20.673 ], 00:33:20.673 "driver_specific": {} 00:33:20.673 } 00:33:20.673 ] 00:33:20.673 17:30:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.673 17:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:33:20.673 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:33:20.673 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:33:20.673 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:33:20.673 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:20.673 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:20.673 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:20.673 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:20.673 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:20.673 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:20.673 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:20.673 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:20.673 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:20.673 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:20.673 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:20.673 17:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.673 17:30:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:33:20.673 17:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.673 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:20.673 "name": "Existed_Raid", 00:33:20.673 "uuid": "b546156b-98fb-4034-a198-2f4e1c7c86cc", 00:33:20.673 "strip_size_kb": 0, 00:33:20.673 "state": "online", 00:33:20.673 "raid_level": "raid1", 00:33:20.673 "superblock": true, 00:33:20.673 "num_base_bdevs": 2, 00:33:20.673 "num_base_bdevs_discovered": 2, 00:33:20.673 "num_base_bdevs_operational": 2, 00:33:20.673 "base_bdevs_list": [ 00:33:20.673 { 00:33:20.673 "name": "BaseBdev1", 00:33:20.673 "uuid": "6ea50196-4b9d-4a7d-8974-c84da86de395", 00:33:20.673 "is_configured": true, 00:33:20.673 "data_offset": 2048, 00:33:20.673 "data_size": 63488 00:33:20.673 }, 00:33:20.673 { 00:33:20.673 "name": "BaseBdev2", 00:33:20.673 "uuid": "33598d25-8c79-48ae-8215-02b29ea936a1", 00:33:20.673 "is_configured": true, 00:33:20.673 "data_offset": 2048, 00:33:20.673 "data_size": 63488 00:33:20.673 } 00:33:20.673 ] 00:33:20.673 }' 00:33:20.673 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:20.673 17:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:20.933 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:33:20.933 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:33:20.933 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:33:20.933 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:33:20.933 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:33:20.933 17:30:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:33:20.933 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:33:20.933 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:33:20.933 17:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.933 17:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:20.933 [2024-11-26 17:30:21.546767] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:20.933 17:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.933 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:20.933 "name": "Existed_Raid", 00:33:20.933 "aliases": [ 00:33:20.933 "b546156b-98fb-4034-a198-2f4e1c7c86cc" 00:33:20.933 ], 00:33:20.933 "product_name": "Raid Volume", 00:33:20.933 "block_size": 512, 00:33:20.933 "num_blocks": 63488, 00:33:20.933 "uuid": "b546156b-98fb-4034-a198-2f4e1c7c86cc", 00:33:20.933 "assigned_rate_limits": { 00:33:20.933 "rw_ios_per_sec": 0, 00:33:20.933 "rw_mbytes_per_sec": 0, 00:33:20.933 "r_mbytes_per_sec": 0, 00:33:20.933 "w_mbytes_per_sec": 0 00:33:20.933 }, 00:33:20.933 "claimed": false, 00:33:20.933 "zoned": false, 00:33:20.933 "supported_io_types": { 00:33:20.933 "read": true, 00:33:20.933 "write": true, 00:33:20.933 "unmap": false, 00:33:20.933 "flush": false, 00:33:20.933 "reset": true, 00:33:20.933 "nvme_admin": false, 00:33:20.933 "nvme_io": false, 00:33:20.933 "nvme_io_md": false, 00:33:20.933 "write_zeroes": true, 00:33:20.933 "zcopy": false, 00:33:20.933 "get_zone_info": false, 00:33:20.933 "zone_management": false, 00:33:20.933 "zone_append": false, 00:33:20.933 "compare": false, 00:33:20.933 "compare_and_write": false, 00:33:20.933 "abort": false, 00:33:20.933 "seek_hole": false, 
00:33:20.933 "seek_data": false, 00:33:20.933 "copy": false, 00:33:20.933 "nvme_iov_md": false 00:33:20.933 }, 00:33:20.933 "memory_domains": [ 00:33:20.933 { 00:33:20.933 "dma_device_id": "system", 00:33:20.933 "dma_device_type": 1 00:33:20.933 }, 00:33:20.933 { 00:33:20.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:20.933 "dma_device_type": 2 00:33:20.933 }, 00:33:20.933 { 00:33:20.933 "dma_device_id": "system", 00:33:20.933 "dma_device_type": 1 00:33:20.933 }, 00:33:20.933 { 00:33:20.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:20.933 "dma_device_type": 2 00:33:20.933 } 00:33:20.933 ], 00:33:20.933 "driver_specific": { 00:33:20.933 "raid": { 00:33:20.933 "uuid": "b546156b-98fb-4034-a198-2f4e1c7c86cc", 00:33:20.933 "strip_size_kb": 0, 00:33:20.933 "state": "online", 00:33:20.933 "raid_level": "raid1", 00:33:20.933 "superblock": true, 00:33:20.933 "num_base_bdevs": 2, 00:33:20.934 "num_base_bdevs_discovered": 2, 00:33:20.934 "num_base_bdevs_operational": 2, 00:33:20.934 "base_bdevs_list": [ 00:33:20.934 { 00:33:20.934 "name": "BaseBdev1", 00:33:20.934 "uuid": "6ea50196-4b9d-4a7d-8974-c84da86de395", 00:33:20.934 "is_configured": true, 00:33:20.934 "data_offset": 2048, 00:33:20.934 "data_size": 63488 00:33:20.934 }, 00:33:20.934 { 00:33:20.934 "name": "BaseBdev2", 00:33:20.934 "uuid": "33598d25-8c79-48ae-8215-02b29ea936a1", 00:33:20.934 "is_configured": true, 00:33:20.934 "data_offset": 2048, 00:33:20.934 "data_size": 63488 00:33:20.934 } 00:33:20.934 ] 00:33:20.934 } 00:33:20.934 } 00:33:20.934 }' 00:33:20.934 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:20.934 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:33:20.934 BaseBdev2' 00:33:20.934 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:33:21.193 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:33:21.193 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:21.193 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:33:21.193 17:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.193 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:21.193 17:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:21.193 17:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.194 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:21.194 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:21.194 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:21.194 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:33:21.194 17:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.194 17:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:21.194 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:21.194 17:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.194 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:21.194 17:30:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:21.194 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:33:21.194 17:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.194 17:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:21.194 [2024-11-26 17:30:21.754126] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:21.194 17:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.194 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:33:21.194 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:33:21.194 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:33:21.194 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:33:21.194 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:33:21.194 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:33:21.194 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:21.194 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:21.194 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:21.194 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:21.194 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:21.194 17:30:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:21.194 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:21.194 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:21.194 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:21.194 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:21.194 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:21.194 17:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.194 17:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:21.194 17:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.453 17:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:21.453 "name": "Existed_Raid", 00:33:21.453 "uuid": "b546156b-98fb-4034-a198-2f4e1c7c86cc", 00:33:21.453 "strip_size_kb": 0, 00:33:21.453 "state": "online", 00:33:21.453 "raid_level": "raid1", 00:33:21.453 "superblock": true, 00:33:21.453 "num_base_bdevs": 2, 00:33:21.453 "num_base_bdevs_discovered": 1, 00:33:21.453 "num_base_bdevs_operational": 1, 00:33:21.453 "base_bdevs_list": [ 00:33:21.453 { 00:33:21.453 "name": null, 00:33:21.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:21.453 "is_configured": false, 00:33:21.453 "data_offset": 0, 00:33:21.453 "data_size": 63488 00:33:21.454 }, 00:33:21.454 { 00:33:21.454 "name": "BaseBdev2", 00:33:21.454 "uuid": "33598d25-8c79-48ae-8215-02b29ea936a1", 00:33:21.454 "is_configured": true, 00:33:21.454 "data_offset": 2048, 00:33:21.454 "data_size": 63488 00:33:21.454 } 00:33:21.454 ] 00:33:21.454 }' 00:33:21.454 17:30:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:21.454 17:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:21.713 17:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:33:21.713 17:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:33:21.713 17:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:21.713 17:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.713 17:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:21.713 17:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:33:21.713 17:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.713 17:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:33:21.713 17:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:21.713 17:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:33:21.713 17:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.713 17:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:21.713 [2024-11-26 17:30:22.343721] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:21.713 [2024-11-26 17:30:22.343906] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:21.973 [2024-11-26 17:30:22.449715] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:21.973 [2024-11-26 17:30:22.449806] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:21.973 [2024-11-26 17:30:22.449824] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:33:21.973 17:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.973 17:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:33:21.973 17:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:33:21.973 17:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:21.973 17:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.973 17:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:21.973 17:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:33:21.973 17:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.973 17:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:33:21.973 17:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:33:21.973 17:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:33:21.973 17:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63193 00:33:21.973 17:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 63193 ']' 00:33:21.973 17:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 63193 00:33:21.973 17:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:33:21.973 17:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:33:21.973 17:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63193 00:33:21.973 killing process with pid 63193 00:33:21.973 17:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:21.973 17:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:21.973 17:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63193' 00:33:21.973 17:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 63193 00:33:21.973 [2024-11-26 17:30:22.522174] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:21.973 17:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 63193 00:33:21.973 [2024-11-26 17:30:22.539977] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:23.353 17:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:33:23.353 00:33:23.353 real 0m5.018s 00:33:23.353 user 0m7.208s 00:33:23.353 sys 0m0.757s 00:33:23.353 17:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:23.353 ************************************ 00:33:23.353 END TEST raid_state_function_test_sb 00:33:23.353 ************************************ 00:33:23.353 17:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:23.353 17:30:23 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:33:23.353 17:30:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:23.353 17:30:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:23.353 17:30:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:23.353 ************************************ 00:33:23.353 START TEST 
raid_superblock_test 00:33:23.353 ************************************ 00:33:23.353 17:30:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:33:23.354 17:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:33:23.354 17:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:33:23.354 17:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:33:23.354 17:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:33:23.354 17:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:33:23.354 17:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:33:23.354 17:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:33:23.354 17:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:33:23.354 17:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:33:23.354 17:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:33:23.354 17:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:33:23.354 17:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:33:23.354 17:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:33:23.354 17:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:33:23.354 17:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:33:23.354 17:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63445 00:33:23.354 17:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:33:23.354 17:30:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63445 00:33:23.354 17:30:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63445 ']' 00:33:23.354 17:30:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:23.354 17:30:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:23.354 17:30:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:23.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:23.354 17:30:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:23.354 17:30:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:23.354 [2024-11-26 17:30:23.842385] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:33:23.354 [2024-11-26 17:30:23.842598] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63445 ] 00:33:23.354 [2024-11-26 17:30:24.014031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:23.613 [2024-11-26 17:30:24.135371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:23.888 [2024-11-26 17:30:24.333497] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:23.888 [2024-11-26 17:30:24.333631] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:24.148 17:30:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:24.148 17:30:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:33:24.148 17:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:33:24.148 17:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:33:24.148 17:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:33:24.148 17:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:33:24.148 17:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:33:24.148 17:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:24.148 17:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:33:24.148 17:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:24.148 17:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:33:24.148 
17:30:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.148 17:30:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:24.148 malloc1 00:33:24.148 17:30:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.148 17:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:24.148 17:30:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.148 17:30:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:24.148 [2024-11-26 17:30:24.780298] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:24.148 [2024-11-26 17:30:24.780450] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:24.148 [2024-11-26 17:30:24.780480] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:33:24.148 [2024-11-26 17:30:24.780491] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:24.148 [2024-11-26 17:30:24.782887] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:24.148 [2024-11-26 17:30:24.782931] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:24.148 pt1 00:33:24.148 17:30:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.148 17:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:33:24.148 17:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:33:24.148 17:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:33:24.148 17:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:33:24.148 17:30:24 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:33:24.148 17:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:24.148 17:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:33:24.148 17:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:24.148 17:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:33:24.148 17:30:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.148 17:30:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:24.148 malloc2 00:33:24.148 17:30:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.148 17:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:24.148 17:30:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.148 17:30:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:24.148 [2024-11-26 17:30:24.834777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:24.148 [2024-11-26 17:30:24.834896] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:24.148 [2024-11-26 17:30:24.834941] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:33:24.148 [2024-11-26 17:30:24.834972] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:24.148 [2024-11-26 17:30:24.837156] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:24.148 [2024-11-26 17:30:24.837230] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:24.148 
pt2 00:33:24.148 17:30:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.148 17:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:33:24.148 17:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:33:24.407 17:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:33:24.407 17:30:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.407 17:30:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:24.407 [2024-11-26 17:30:24.846797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:24.407 [2024-11-26 17:30:24.848688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:24.407 [2024-11-26 17:30:24.848892] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:33:24.407 [2024-11-26 17:30:24.848942] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:33:24.407 [2024-11-26 17:30:24.849218] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:33:24.407 [2024-11-26 17:30:24.849409] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:33:24.407 [2024-11-26 17:30:24.849455] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:33:24.407 [2024-11-26 17:30:24.849653] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:24.407 17:30:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.407 17:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:24.407 17:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:33:24.407 17:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:24.407 17:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:24.407 17:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:24.407 17:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:24.407 17:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:24.407 17:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:24.408 17:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:24.408 17:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:24.408 17:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:24.408 17:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:24.408 17:30:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.408 17:30:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:24.408 17:30:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.408 17:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:24.408 "name": "raid_bdev1", 00:33:24.408 "uuid": "4e00519a-414e-416c-a0e8-d0f24c69eb9f", 00:33:24.408 "strip_size_kb": 0, 00:33:24.408 "state": "online", 00:33:24.408 "raid_level": "raid1", 00:33:24.408 "superblock": true, 00:33:24.408 "num_base_bdevs": 2, 00:33:24.408 "num_base_bdevs_discovered": 2, 00:33:24.408 "num_base_bdevs_operational": 2, 00:33:24.408 "base_bdevs_list": [ 00:33:24.408 { 00:33:24.408 "name": "pt1", 00:33:24.408 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:33:24.408 "is_configured": true, 00:33:24.408 "data_offset": 2048, 00:33:24.408 "data_size": 63488 00:33:24.408 }, 00:33:24.408 { 00:33:24.408 "name": "pt2", 00:33:24.408 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:24.408 "is_configured": true, 00:33:24.408 "data_offset": 2048, 00:33:24.408 "data_size": 63488 00:33:24.408 } 00:33:24.408 ] 00:33:24.408 }' 00:33:24.408 17:30:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:24.408 17:30:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:24.668 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:33:24.668 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:33:24.668 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:33:24.668 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:33:24.668 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:33:24.668 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:33:24.668 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:24.668 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:33:24.668 17:30:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.668 17:30:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:24.668 [2024-11-26 17:30:25.306318] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:24.668 17:30:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.668 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:33:24.668 "name": "raid_bdev1", 00:33:24.668 "aliases": [ 00:33:24.668 "4e00519a-414e-416c-a0e8-d0f24c69eb9f" 00:33:24.668 ], 00:33:24.668 "product_name": "Raid Volume", 00:33:24.668 "block_size": 512, 00:33:24.668 "num_blocks": 63488, 00:33:24.668 "uuid": "4e00519a-414e-416c-a0e8-d0f24c69eb9f", 00:33:24.668 "assigned_rate_limits": { 00:33:24.668 "rw_ios_per_sec": 0, 00:33:24.668 "rw_mbytes_per_sec": 0, 00:33:24.668 "r_mbytes_per_sec": 0, 00:33:24.668 "w_mbytes_per_sec": 0 00:33:24.668 }, 00:33:24.668 "claimed": false, 00:33:24.668 "zoned": false, 00:33:24.668 "supported_io_types": { 00:33:24.668 "read": true, 00:33:24.668 "write": true, 00:33:24.668 "unmap": false, 00:33:24.668 "flush": false, 00:33:24.668 "reset": true, 00:33:24.668 "nvme_admin": false, 00:33:24.668 "nvme_io": false, 00:33:24.668 "nvme_io_md": false, 00:33:24.668 "write_zeroes": true, 00:33:24.668 "zcopy": false, 00:33:24.668 "get_zone_info": false, 00:33:24.668 "zone_management": false, 00:33:24.668 "zone_append": false, 00:33:24.668 "compare": false, 00:33:24.668 "compare_and_write": false, 00:33:24.668 "abort": false, 00:33:24.668 "seek_hole": false, 00:33:24.668 "seek_data": false, 00:33:24.668 "copy": false, 00:33:24.668 "nvme_iov_md": false 00:33:24.668 }, 00:33:24.668 "memory_domains": [ 00:33:24.668 { 00:33:24.668 "dma_device_id": "system", 00:33:24.668 "dma_device_type": 1 00:33:24.668 }, 00:33:24.668 { 00:33:24.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:24.668 "dma_device_type": 2 00:33:24.668 }, 00:33:24.668 { 00:33:24.668 "dma_device_id": "system", 00:33:24.668 "dma_device_type": 1 00:33:24.668 }, 00:33:24.668 { 00:33:24.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:24.668 "dma_device_type": 2 00:33:24.668 } 00:33:24.668 ], 00:33:24.668 "driver_specific": { 00:33:24.668 "raid": { 00:33:24.668 "uuid": "4e00519a-414e-416c-a0e8-d0f24c69eb9f", 00:33:24.668 "strip_size_kb": 0, 00:33:24.668 "state": "online", 00:33:24.668 "raid_level": "raid1", 
00:33:24.668 "superblock": true, 00:33:24.668 "num_base_bdevs": 2, 00:33:24.668 "num_base_bdevs_discovered": 2, 00:33:24.668 "num_base_bdevs_operational": 2, 00:33:24.668 "base_bdevs_list": [ 00:33:24.668 { 00:33:24.668 "name": "pt1", 00:33:24.668 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:24.668 "is_configured": true, 00:33:24.668 "data_offset": 2048, 00:33:24.668 "data_size": 63488 00:33:24.668 }, 00:33:24.668 { 00:33:24.668 "name": "pt2", 00:33:24.668 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:24.668 "is_configured": true, 00:33:24.668 "data_offset": 2048, 00:33:24.668 "data_size": 63488 00:33:24.668 } 00:33:24.668 ] 00:33:24.668 } 00:33:24.668 } 00:33:24.668 }' 00:33:24.668 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:24.928 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:33:24.928 pt2' 00:33:24.928 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:24.928 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:33:24.928 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:24.928 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:33:24.928 17:30:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.928 17:30:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:24.928 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:24.928 17:30:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.928 17:30:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:24.928 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:24.928 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:24.928 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:24.928 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:33:24.928 17:30:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.928 17:30:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:24.928 17:30:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.928 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:24.928 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:24.928 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:24.928 17:30:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.928 17:30:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:24.928 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:33:24.928 [2024-11-26 17:30:25.537918] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:24.928 17:30:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.928 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4e00519a-414e-416c-a0e8-d0f24c69eb9f 00:33:24.928 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4e00519a-414e-416c-a0e8-d0f24c69eb9f ']' 00:33:24.928 17:30:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:24.928 17:30:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.928 17:30:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:24.928 [2024-11-26 17:30:25.589554] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:24.929 [2024-11-26 17:30:25.589585] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:24.929 [2024-11-26 17:30:25.589695] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:24.929 [2024-11-26 17:30:25.589761] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:24.929 [2024-11-26 17:30:25.589775] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:33:24.929 17:30:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.929 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:24.929 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:33:24.929 17:30:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.929 17:30:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:24.929 17:30:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:33:25.189 17:30:25 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:25.189 [2024-11-26 17:30:25.725319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:33:25.189 [2024-11-26 17:30:25.727417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:33:25.189 [2024-11-26 17:30:25.727592] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:33:25.189 [2024-11-26 17:30:25.727718] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:33:25.189 [2024-11-26 17:30:25.727790] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:25.189 [2024-11-26 17:30:25.727830] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:33:25.189 request: 00:33:25.189 { 00:33:25.189 "name": "raid_bdev1", 00:33:25.189 "raid_level": "raid1", 00:33:25.189 "base_bdevs": [ 00:33:25.189 "malloc1", 00:33:25.189 "malloc2" 00:33:25.189 ], 00:33:25.189 "superblock": false, 00:33:25.189 "method": "bdev_raid_create", 00:33:25.189 "req_id": 1 00:33:25.189 } 00:33:25.189 Got 
JSON-RPC error response 00:33:25.189 response: 00:33:25.189 { 00:33:25.189 "code": -17, 00:33:25.189 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:33:25.189 } 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:25.189 [2024-11-26 17:30:25.777207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:25.189 [2024-11-26 17:30:25.777317] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:33:25.189 [2024-11-26 17:30:25.777364] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:33:25.189 [2024-11-26 17:30:25.777396] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:25.189 [2024-11-26 17:30:25.779643] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:25.189 [2024-11-26 17:30:25.779716] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:25.189 [2024-11-26 17:30:25.779835] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:33:25.189 [2024-11-26 17:30:25.779935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:25.189 pt1 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:25.189 
17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.189 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:25.189 "name": "raid_bdev1", 00:33:25.189 "uuid": "4e00519a-414e-416c-a0e8-d0f24c69eb9f", 00:33:25.189 "strip_size_kb": 0, 00:33:25.189 "state": "configuring", 00:33:25.189 "raid_level": "raid1", 00:33:25.189 "superblock": true, 00:33:25.189 "num_base_bdevs": 2, 00:33:25.189 "num_base_bdevs_discovered": 1, 00:33:25.189 "num_base_bdevs_operational": 2, 00:33:25.189 "base_bdevs_list": [ 00:33:25.189 { 00:33:25.189 "name": "pt1", 00:33:25.189 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:25.189 "is_configured": true, 00:33:25.189 "data_offset": 2048, 00:33:25.189 "data_size": 63488 00:33:25.189 }, 00:33:25.189 { 00:33:25.189 "name": null, 00:33:25.189 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:25.189 "is_configured": false, 00:33:25.189 "data_offset": 2048, 00:33:25.190 "data_size": 63488 00:33:25.190 } 00:33:25.190 ] 00:33:25.190 }' 00:33:25.190 17:30:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:25.190 17:30:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:25.759 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:33:25.759 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:33:25.759 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs 
)) 00:33:25.759 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:25.759 17:30:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.759 17:30:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:25.759 [2024-11-26 17:30:26.184531] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:25.759 [2024-11-26 17:30:26.184621] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:25.759 [2024-11-26 17:30:26.184645] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:33:25.759 [2024-11-26 17:30:26.184657] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:25.759 [2024-11-26 17:30:26.185110] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:25.759 [2024-11-26 17:30:26.185131] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:25.759 [2024-11-26 17:30:26.185215] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:33:25.759 [2024-11-26 17:30:26.185241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:25.759 [2024-11-26 17:30:26.185351] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:33:25.759 [2024-11-26 17:30:26.185362] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:33:25.759 [2024-11-26 17:30:26.185626] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:33:25.759 [2024-11-26 17:30:26.185781] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:33:25.759 [2024-11-26 17:30:26.185790] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007e80 00:33:25.759 [2024-11-26 17:30:26.185941] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:25.759 pt2 00:33:25.759 17:30:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.759 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:33:25.759 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:33:25.759 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:25.759 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:25.759 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:25.759 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:25.759 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:25.759 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:25.759 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:25.759 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:25.759 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:25.759 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:25.759 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:25.759 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:25.759 17:30:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.759 17:30:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:33:25.759 17:30:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.759 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:25.759 "name": "raid_bdev1", 00:33:25.759 "uuid": "4e00519a-414e-416c-a0e8-d0f24c69eb9f", 00:33:25.759 "strip_size_kb": 0, 00:33:25.759 "state": "online", 00:33:25.759 "raid_level": "raid1", 00:33:25.759 "superblock": true, 00:33:25.759 "num_base_bdevs": 2, 00:33:25.759 "num_base_bdevs_discovered": 2, 00:33:25.759 "num_base_bdevs_operational": 2, 00:33:25.759 "base_bdevs_list": [ 00:33:25.759 { 00:33:25.759 "name": "pt1", 00:33:25.759 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:25.759 "is_configured": true, 00:33:25.759 "data_offset": 2048, 00:33:25.759 "data_size": 63488 00:33:25.759 }, 00:33:25.759 { 00:33:25.759 "name": "pt2", 00:33:25.759 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:25.759 "is_configured": true, 00:33:25.759 "data_offset": 2048, 00:33:25.759 "data_size": 63488 00:33:25.759 } 00:33:25.759 ] 00:33:25.759 }' 00:33:25.759 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:25.759 17:30:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.019 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:33:26.019 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:33:26.019 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:33:26.019 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:33:26.019 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:33:26.019 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:33:26.019 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 
-- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:26.019 17:30:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.019 17:30:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.019 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:33:26.019 [2024-11-26 17:30:26.656013] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:26.019 17:30:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.019 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:26.019 "name": "raid_bdev1", 00:33:26.019 "aliases": [ 00:33:26.019 "4e00519a-414e-416c-a0e8-d0f24c69eb9f" 00:33:26.019 ], 00:33:26.019 "product_name": "Raid Volume", 00:33:26.019 "block_size": 512, 00:33:26.019 "num_blocks": 63488, 00:33:26.019 "uuid": "4e00519a-414e-416c-a0e8-d0f24c69eb9f", 00:33:26.019 "assigned_rate_limits": { 00:33:26.019 "rw_ios_per_sec": 0, 00:33:26.019 "rw_mbytes_per_sec": 0, 00:33:26.019 "r_mbytes_per_sec": 0, 00:33:26.019 "w_mbytes_per_sec": 0 00:33:26.019 }, 00:33:26.019 "claimed": false, 00:33:26.019 "zoned": false, 00:33:26.019 "supported_io_types": { 00:33:26.019 "read": true, 00:33:26.019 "write": true, 00:33:26.019 "unmap": false, 00:33:26.019 "flush": false, 00:33:26.019 "reset": true, 00:33:26.019 "nvme_admin": false, 00:33:26.019 "nvme_io": false, 00:33:26.019 "nvme_io_md": false, 00:33:26.019 "write_zeroes": true, 00:33:26.019 "zcopy": false, 00:33:26.019 "get_zone_info": false, 00:33:26.019 "zone_management": false, 00:33:26.019 "zone_append": false, 00:33:26.019 "compare": false, 00:33:26.019 "compare_and_write": false, 00:33:26.019 "abort": false, 00:33:26.019 "seek_hole": false, 00:33:26.019 "seek_data": false, 00:33:26.019 "copy": false, 00:33:26.019 "nvme_iov_md": false 00:33:26.019 }, 00:33:26.019 "memory_domains": [ 00:33:26.019 { 00:33:26.019 "dma_device_id": 
"system", 00:33:26.019 "dma_device_type": 1 00:33:26.019 }, 00:33:26.019 { 00:33:26.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:26.019 "dma_device_type": 2 00:33:26.019 }, 00:33:26.019 { 00:33:26.019 "dma_device_id": "system", 00:33:26.019 "dma_device_type": 1 00:33:26.019 }, 00:33:26.019 { 00:33:26.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:26.019 "dma_device_type": 2 00:33:26.019 } 00:33:26.019 ], 00:33:26.019 "driver_specific": { 00:33:26.019 "raid": { 00:33:26.019 "uuid": "4e00519a-414e-416c-a0e8-d0f24c69eb9f", 00:33:26.019 "strip_size_kb": 0, 00:33:26.019 "state": "online", 00:33:26.019 "raid_level": "raid1", 00:33:26.019 "superblock": true, 00:33:26.019 "num_base_bdevs": 2, 00:33:26.019 "num_base_bdevs_discovered": 2, 00:33:26.019 "num_base_bdevs_operational": 2, 00:33:26.019 "base_bdevs_list": [ 00:33:26.019 { 00:33:26.019 "name": "pt1", 00:33:26.019 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:26.019 "is_configured": true, 00:33:26.019 "data_offset": 2048, 00:33:26.019 "data_size": 63488 00:33:26.019 }, 00:33:26.019 { 00:33:26.019 "name": "pt2", 00:33:26.019 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:26.019 "is_configured": true, 00:33:26.019 "data_offset": 2048, 00:33:26.019 "data_size": 63488 00:33:26.019 } 00:33:26.019 ] 00:33:26.019 } 00:33:26.019 } 00:33:26.019 }' 00:33:26.019 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:26.278 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:33:26.278 pt2' 00:33:26.278 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:26.278 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:33:26.278 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:33:26.278 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:33:26.278 17:30:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.278 17:30:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.278 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:26.278 17:30:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.278 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:26.278 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:26.278 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:26.278 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:26.278 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:33:26.278 17:30:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.278 17:30:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.278 17:30:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.278 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:26.278 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:26.279 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:33:26.279 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:26.279 17:30:26 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.279 17:30:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.279 [2024-11-26 17:30:26.867964] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:26.279 17:30:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.279 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4e00519a-414e-416c-a0e8-d0f24c69eb9f '!=' 4e00519a-414e-416c-a0e8-d0f24c69eb9f ']' 00:33:26.279 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:33:26.279 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:33:26.279 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:33:26.279 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:33:26.279 17:30:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.279 17:30:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.279 [2024-11-26 17:30:26.895716] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:33:26.279 17:30:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.279 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:26.279 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:26.279 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:26.279 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:26.279 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:26.279 17:30:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:26.279 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:26.279 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:26.279 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:26.279 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:26.279 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:26.279 17:30:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.279 17:30:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.279 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:26.279 17:30:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.279 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:26.279 "name": "raid_bdev1", 00:33:26.279 "uuid": "4e00519a-414e-416c-a0e8-d0f24c69eb9f", 00:33:26.279 "strip_size_kb": 0, 00:33:26.279 "state": "online", 00:33:26.279 "raid_level": "raid1", 00:33:26.279 "superblock": true, 00:33:26.279 "num_base_bdevs": 2, 00:33:26.279 "num_base_bdevs_discovered": 1, 00:33:26.279 "num_base_bdevs_operational": 1, 00:33:26.279 "base_bdevs_list": [ 00:33:26.279 { 00:33:26.279 "name": null, 00:33:26.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:26.279 "is_configured": false, 00:33:26.279 "data_offset": 0, 00:33:26.279 "data_size": 63488 00:33:26.279 }, 00:33:26.279 { 00:33:26.279 "name": "pt2", 00:33:26.279 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:26.279 "is_configured": true, 00:33:26.279 "data_offset": 2048, 00:33:26.279 "data_size": 63488 00:33:26.279 } 00:33:26.279 ] 00:33:26.279 }' 
00:33:26.279 17:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:26.279 17:30:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.848 17:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:26.848 17:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.848 17:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.848 [2024-11-26 17:30:27.303662] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:26.848 [2024-11-26 17:30:27.303765] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:26.848 [2024-11-26 17:30:27.303855] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:26.848 [2024-11-26 17:30:27.303905] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:26.848 [2024-11-26 17:30:27.303916] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:33:26.848 17:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.848 17:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:26.848 17:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.848 17:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:33:26.848 17:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.848 17:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.848 17:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:33:26.848 17:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' 
']' 00:33:26.848 17:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:33:26.848 17:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:33:26.848 17:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:33:26.848 17:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.848 17:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.848 17:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.848 17:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:33:26.848 17:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:33:26.848 17:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:33:26.848 17:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:33:26.848 17:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:33:26.848 17:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:26.848 17:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.848 17:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.848 [2024-11-26 17:30:27.375658] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:26.848 [2024-11-26 17:30:27.375724] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:26.848 [2024-11-26 17:30:27.375742] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:33:26.848 [2024-11-26 17:30:27.375753] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:26.848 
[2024-11-26 17:30:27.377976] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:26.848 [2024-11-26 17:30:27.378018] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:26.848 [2024-11-26 17:30:27.378102] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:33:26.848 [2024-11-26 17:30:27.378153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:26.848 [2024-11-26 17:30:27.378253] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:33:26.848 [2024-11-26 17:30:27.378270] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:33:26.848 [2024-11-26 17:30:27.378501] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:33:26.848 [2024-11-26 17:30:27.378683] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:33:26.848 [2024-11-26 17:30:27.378694] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:33:26.848 [2024-11-26 17:30:27.378842] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:26.848 pt2 00:33:26.848 17:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.848 17:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:26.849 17:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:26.849 17:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:26.849 17:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:26.849 17:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:26.849 17:30:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:26.849 17:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:26.849 17:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:26.849 17:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:26.849 17:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:26.849 17:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:26.849 17:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:26.849 17:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.849 17:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.849 17:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.849 17:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:26.849 "name": "raid_bdev1", 00:33:26.849 "uuid": "4e00519a-414e-416c-a0e8-d0f24c69eb9f", 00:33:26.849 "strip_size_kb": 0, 00:33:26.849 "state": "online", 00:33:26.849 "raid_level": "raid1", 00:33:26.849 "superblock": true, 00:33:26.849 "num_base_bdevs": 2, 00:33:26.849 "num_base_bdevs_discovered": 1, 00:33:26.849 "num_base_bdevs_operational": 1, 00:33:26.849 "base_bdevs_list": [ 00:33:26.849 { 00:33:26.849 "name": null, 00:33:26.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:26.849 "is_configured": false, 00:33:26.849 "data_offset": 2048, 00:33:26.849 "data_size": 63488 00:33:26.849 }, 00:33:26.849 { 00:33:26.849 "name": "pt2", 00:33:26.849 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:26.849 "is_configured": true, 00:33:26.849 "data_offset": 2048, 00:33:26.849 "data_size": 63488 00:33:26.849 } 00:33:26.849 ] 00:33:26.849 }' 
00:33:26.849 17:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:26.849 17:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:27.108 17:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:27.108 17:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.108 17:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:27.108 [2024-11-26 17:30:27.779649] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:27.108 [2024-11-26 17:30:27.779735] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:27.108 [2024-11-26 17:30:27.779845] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:27.108 [2024-11-26 17:30:27.779927] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:27.108 [2024-11-26 17:30:27.779992] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:33:27.108 17:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.108 17:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:33:27.108 17:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:27.108 17:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.108 17:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:27.108 17:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.369 17:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:33:27.369 17:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:33:27.369 17:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:33:27.369 17:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:27.369 17:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.369 17:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:27.369 [2024-11-26 17:30:27.839667] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:27.369 [2024-11-26 17:30:27.839733] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:27.369 [2024-11-26 17:30:27.839757] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:33:27.369 [2024-11-26 17:30:27.839767] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:27.369 [2024-11-26 17:30:27.842070] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:27.369 [2024-11-26 17:30:27.842147] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:27.369 [2024-11-26 17:30:27.842263] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:33:27.369 [2024-11-26 17:30:27.842314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:27.369 [2024-11-26 17:30:27.842480] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:33:27.369 [2024-11-26 17:30:27.842493] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:27.369 [2024-11-26 17:30:27.842510] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:33:27.369 [2024-11-26 17:30:27.842606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:33:27.370 [2024-11-26 17:30:27.842681] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:33:27.370 [2024-11-26 17:30:27.842689] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:33:27.370 [2024-11-26 17:30:27.842935] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:33:27.370 [2024-11-26 17:30:27.843085] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:33:27.370 [2024-11-26 17:30:27.843098] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:33:27.370 [2024-11-26 17:30:27.843259] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:27.370 pt1 00:33:27.370 17:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.370 17:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:33:27.370 17:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:27.370 17:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:27.370 17:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:27.370 17:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:27.370 17:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:27.370 17:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:27.370 17:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:27.370 17:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:27.370 17:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:33:27.370 17:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:27.370 17:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:27.370 17:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:27.370 17:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.370 17:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:27.370 17:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.370 17:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:27.370 "name": "raid_bdev1", 00:33:27.370 "uuid": "4e00519a-414e-416c-a0e8-d0f24c69eb9f", 00:33:27.370 "strip_size_kb": 0, 00:33:27.370 "state": "online", 00:33:27.370 "raid_level": "raid1", 00:33:27.370 "superblock": true, 00:33:27.370 "num_base_bdevs": 2, 00:33:27.370 "num_base_bdevs_discovered": 1, 00:33:27.370 "num_base_bdevs_operational": 1, 00:33:27.370 "base_bdevs_list": [ 00:33:27.370 { 00:33:27.370 "name": null, 00:33:27.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:27.370 "is_configured": false, 00:33:27.370 "data_offset": 2048, 00:33:27.370 "data_size": 63488 00:33:27.370 }, 00:33:27.370 { 00:33:27.370 "name": "pt2", 00:33:27.370 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:27.370 "is_configured": true, 00:33:27.370 "data_offset": 2048, 00:33:27.370 "data_size": 63488 00:33:27.370 } 00:33:27.370 ] 00:33:27.370 }' 00:33:27.370 17:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:27.370 17:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:27.631 17:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:33:27.631 17:30:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:33:27.631 17:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.631 17:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:27.891 17:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.891 17:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:33:27.891 17:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:27.891 17:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:33:27.891 17:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.891 17:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:27.891 [2024-11-26 17:30:28.347861] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:27.891 17:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.891 17:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 4e00519a-414e-416c-a0e8-d0f24c69eb9f '!=' 4e00519a-414e-416c-a0e8-d0f24c69eb9f ']' 00:33:27.891 17:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63445 00:33:27.891 17:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63445 ']' 00:33:27.891 17:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63445 00:33:27.891 17:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:33:27.891 17:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:27.891 17:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63445 00:33:27.891 
17:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:27.891 17:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:27.891 17:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63445' 00:33:27.891 killing process with pid 63445 00:33:27.891 17:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63445 00:33:27.891 [2024-11-26 17:30:28.430397] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:27.891 [2024-11-26 17:30:28.430562] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:27.891 17:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63445 00:33:27.891 [2024-11-26 17:30:28.430644] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:27.891 [2024-11-26 17:30:28.430662] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:33:28.150 [2024-11-26 17:30:28.637570] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:29.527 17:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:33:29.527 00:33:29.527 real 0m6.045s 00:33:29.527 user 0m9.163s 00:33:29.527 sys 0m1.004s 00:33:29.527 ************************************ 00:33:29.527 END TEST raid_superblock_test 00:33:29.527 ************************************ 00:33:29.527 17:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:29.527 17:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:29.527 17:30:29 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:33:29.527 17:30:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:33:29.527 17:30:29 bdev_raid 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:29.527 17:30:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:29.527 ************************************ 00:33:29.527 START TEST raid_read_error_test 00:33:29.527 ************************************ 00:33:29.527 17:30:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:33:29.527 17:30:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:33:29.527 17:30:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:33:29.527 17:30:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:33:29.527 17:30:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:33:29.527 17:30:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:33:29.527 17:30:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:33:29.527 17:30:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:33:29.527 17:30:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:33:29.527 17:30:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:33:29.527 17:30:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:33:29.527 17:30:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:33:29.527 17:30:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:33:29.527 17:30:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:33:29.527 17:30:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:33:29.527 17:30:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:33:29.527 17:30:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:33:29.527 17:30:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:33:29.527 17:30:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:33:29.527 17:30:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:33:29.527 17:30:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:33:29.527 17:30:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:33:29.527 17:30:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.SaCpale3dd 00:33:29.527 17:30:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63774 00:33:29.527 17:30:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:33:29.527 17:30:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63774 00:33:29.527 17:30:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63774 ']' 00:33:29.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:29.527 17:30:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:29.527 17:30:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:29.527 17:30:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:29.527 17:30:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:29.527 17:30:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:29.527 [2024-11-26 17:30:29.973944] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:33:29.527 [2024-11-26 17:30:29.974061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63774 ] 00:33:29.527 [2024-11-26 17:30:30.149559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:29.787 [2024-11-26 17:30:30.273460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:30.045 [2024-11-26 17:30:30.488037] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:30.045 [2024-11-26 17:30:30.488220] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:30.306 17:30:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:30.306 17:30:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:33:30.306 17:30:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:33:30.306 17:30:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:33:30.306 17:30:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.306 17:30:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:30.306 BaseBdev1_malloc 00:33:30.306 17:30:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.306 17:30:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:33:30.306 17:30:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.306 17:30:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:30.306 true 00:33:30.306 17:30:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.306 17:30:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:33:30.306 17:30:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.306 17:30:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:30.306 [2024-11-26 17:30:30.882577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:33:30.306 [2024-11-26 17:30:30.882632] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:30.306 [2024-11-26 17:30:30.882653] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:33:30.306 [2024-11-26 17:30:30.882663] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:30.306 [2024-11-26 17:30:30.884765] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:30.306 [2024-11-26 17:30:30.884807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:30.306 BaseBdev1 00:33:30.306 17:30:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.306 17:30:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:33:30.306 17:30:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:33:30.306 17:30:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.306 17:30:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:33:30.306 BaseBdev2_malloc 00:33:30.306 17:30:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.306 17:30:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:33:30.306 17:30:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.306 17:30:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:30.306 true 00:33:30.306 17:30:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.306 17:30:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:33:30.306 17:30:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.306 17:30:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:30.306 [2024-11-26 17:30:30.946467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:33:30.306 [2024-11-26 17:30:30.946533] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:30.306 [2024-11-26 17:30:30.946551] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:33:30.306 [2024-11-26 17:30:30.946561] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:30.306 [2024-11-26 17:30:30.948865] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:30.306 [2024-11-26 17:30:30.948909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:33:30.306 BaseBdev2 00:33:30.306 17:30:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.306 17:30:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:33:30.306 17:30:30 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.306 17:30:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:30.306 [2024-11-26 17:30:30.958524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:30.306 [2024-11-26 17:30:30.960607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:30.306 [2024-11-26 17:30:30.960827] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:33:30.306 [2024-11-26 17:30:30.960845] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:33:30.306 [2024-11-26 17:30:30.961113] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:33:30.306 [2024-11-26 17:30:30.961314] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:33:30.306 [2024-11-26 17:30:30.961333] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:33:30.306 [2024-11-26 17:30:30.961497] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:30.306 17:30:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.306 17:30:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:30.307 17:30:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:30.307 17:30:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:30.307 17:30:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:30.307 17:30:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:30.307 17:30:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:33:30.307 17:30:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:30.307 17:30:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:30.307 17:30:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:30.307 17:30:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:30.307 17:30:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:30.307 17:30:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:30.307 17:30:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.307 17:30:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:30.307 17:30:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.566 17:30:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:30.566 "name": "raid_bdev1", 00:33:30.566 "uuid": "92879466-025e-403a-a9e7-ed43ca46d53f", 00:33:30.566 "strip_size_kb": 0, 00:33:30.566 "state": "online", 00:33:30.566 "raid_level": "raid1", 00:33:30.566 "superblock": true, 00:33:30.566 "num_base_bdevs": 2, 00:33:30.566 "num_base_bdevs_discovered": 2, 00:33:30.566 "num_base_bdevs_operational": 2, 00:33:30.566 "base_bdevs_list": [ 00:33:30.566 { 00:33:30.566 "name": "BaseBdev1", 00:33:30.566 "uuid": "96fb8a9d-c935-5f88-8160-f9f0657f96b8", 00:33:30.566 "is_configured": true, 00:33:30.566 "data_offset": 2048, 00:33:30.566 "data_size": 63488 00:33:30.566 }, 00:33:30.566 { 00:33:30.566 "name": "BaseBdev2", 00:33:30.566 "uuid": "153fcda0-508d-5e0c-bf98-d977185f1572", 00:33:30.566 "is_configured": true, 00:33:30.566 "data_offset": 2048, 00:33:30.566 "data_size": 63488 00:33:30.566 } 00:33:30.566 ] 00:33:30.566 }' 00:33:30.566 17:30:31 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:30.566 17:30:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:30.826 17:30:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:33:30.826 17:30:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:33:30.826 [2024-11-26 17:30:31.503178] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:33:31.772 17:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:33:31.772 17:30:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.772 17:30:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:31.772 17:30:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.772 17:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:33:31.772 17:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:33:31.772 17:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:33:31.772 17:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:33:31.772 17:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:31.772 17:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:31.772 17:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:31.772 17:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:31.772 17:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:31.772 17:30:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:31.772 17:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:31.772 17:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:31.772 17:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:31.772 17:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:31.772 17:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:31.772 17:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:31.772 17:30:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.772 17:30:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:31.772 17:30:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.772 17:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:31.772 "name": "raid_bdev1", 00:33:31.772 "uuid": "92879466-025e-403a-a9e7-ed43ca46d53f", 00:33:31.772 "strip_size_kb": 0, 00:33:31.772 "state": "online", 00:33:31.772 "raid_level": "raid1", 00:33:31.772 "superblock": true, 00:33:31.772 "num_base_bdevs": 2, 00:33:31.772 "num_base_bdevs_discovered": 2, 00:33:31.773 "num_base_bdevs_operational": 2, 00:33:31.773 "base_bdevs_list": [ 00:33:31.773 { 00:33:31.773 "name": "BaseBdev1", 00:33:31.773 "uuid": "96fb8a9d-c935-5f88-8160-f9f0657f96b8", 00:33:31.773 "is_configured": true, 00:33:31.773 "data_offset": 2048, 00:33:31.773 "data_size": 63488 00:33:31.773 }, 00:33:31.773 { 00:33:31.773 "name": "BaseBdev2", 00:33:31.773 "uuid": "153fcda0-508d-5e0c-bf98-d977185f1572", 00:33:31.773 "is_configured": true, 00:33:31.773 "data_offset": 2048, 00:33:31.773 "data_size": 63488 
00:33:31.773 } 00:33:31.773 ] 00:33:31.773 }' 00:33:31.773 17:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:31.773 17:30:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:32.359 17:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:32.359 17:30:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.359 17:30:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:32.359 [2024-11-26 17:30:32.813152] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:32.359 [2024-11-26 17:30:32.813198] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:32.359 [2024-11-26 17:30:32.816217] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:32.359 [2024-11-26 17:30:32.816332] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:32.359 [2024-11-26 17:30:32.816435] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:32.359 [2024-11-26 17:30:32.816450] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:33:32.359 { 00:33:32.359 "results": [ 00:33:32.359 { 00:33:32.359 "job": "raid_bdev1", 00:33:32.359 "core_mask": "0x1", 00:33:32.359 "workload": "randrw", 00:33:32.359 "percentage": 50, 00:33:32.359 "status": "finished", 00:33:32.359 "queue_depth": 1, 00:33:32.359 "io_size": 131072, 00:33:32.359 "runtime": 1.310439, 00:33:32.359 "iops": 17039.328041976773, 00:33:32.359 "mibps": 2129.9160052470966, 00:33:32.359 "io_failed": 0, 00:33:32.359 "io_timeout": 0, 00:33:32.359 "avg_latency_us": 55.881881689486384, 00:33:32.359 "min_latency_us": 23.923144104803495, 00:33:32.359 "max_latency_us": 1802.955458515284 00:33:32.359 } 00:33:32.359 ], 
00:33:32.359 "core_count": 1 00:33:32.359 } 00:33:32.359 17:30:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.359 17:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63774 00:33:32.359 17:30:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63774 ']' 00:33:32.359 17:30:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63774 00:33:32.359 17:30:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:33:32.359 17:30:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:32.359 17:30:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63774 00:33:32.359 killing process with pid 63774 00:33:32.359 17:30:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:32.359 17:30:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:32.359 17:30:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63774' 00:33:32.359 17:30:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63774 00:33:32.359 [2024-11-26 17:30:32.860806] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:32.359 17:30:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63774 00:33:32.359 [2024-11-26 17:30:32.999362] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:33.740 17:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.SaCpale3dd 00:33:33.740 17:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:33:33.740 17:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:33:33.740 17:30:34 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:33:33.740 17:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:33:33.740 17:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:33:33.740 17:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:33:33.740 17:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:33:33.740 00:33:33.740 real 0m4.357s 00:33:33.740 user 0m5.210s 00:33:33.740 sys 0m0.499s 00:33:33.740 ************************************ 00:33:33.740 END TEST raid_read_error_test 00:33:33.740 ************************************ 00:33:33.740 17:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:33.740 17:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:33.740 17:30:34 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:33:33.740 17:30:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:33:33.740 17:30:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:33.740 17:30:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:33.740 ************************************ 00:33:33.740 START TEST raid_write_error_test 00:33:33.740 ************************************ 00:33:33.740 17:30:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:33:33.740 17:30:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:33:33.740 17:30:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:33:33.740 17:30:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:33:33.740 17:30:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:33:33.740 17:30:34 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:33:33.740 17:30:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:33:33.740 17:30:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:33:33.740 17:30:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:33:33.740 17:30:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:33:33.740 17:30:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:33:33.740 17:30:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:33:33.740 17:30:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:33:33.740 17:30:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:33:33.740 17:30:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:33:33.740 17:30:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:33:33.740 17:30:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:33:33.740 17:30:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:33:33.741 17:30:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:33:33.741 17:30:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:33:33.741 17:30:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:33:33.741 17:30:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:33:33.741 17:30:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.jYwzOnEzz1 00:33:33.741 17:30:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63915 00:33:33.741 17:30:34 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:33:33.741 17:30:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63915 00:33:33.741 17:30:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63915 ']' 00:33:33.741 17:30:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:33.741 17:30:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:33.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:33.741 17:30:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:33.741 17:30:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:33.741 17:30:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:33.741 [2024-11-26 17:30:34.381153] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:33:33.741 [2024-11-26 17:30:34.381363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63915 ] 00:33:34.000 [2024-11-26 17:30:34.535933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:34.000 [2024-11-26 17:30:34.657227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:34.260 [2024-11-26 17:30:34.866545] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:34.260 [2024-11-26 17:30:34.866609] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:34.831 17:30:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:34.831 17:30:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:33:34.831 17:30:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:33:34.831 17:30:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:33:34.831 17:30:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.831 17:30:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:34.831 BaseBdev1_malloc 00:33:34.831 17:30:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.831 17:30:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:33:34.831 17:30:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.831 17:30:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:34.831 true 00:33:34.831 17:30:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:33:34.831 17:30:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:33:34.831 17:30:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.831 17:30:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:34.831 [2024-11-26 17:30:35.276400] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:33:34.831 [2024-11-26 17:30:35.276459] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:34.831 [2024-11-26 17:30:35.276495] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:33:34.831 [2024-11-26 17:30:35.276507] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:34.831 [2024-11-26 17:30:35.278629] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:34.831 [2024-11-26 17:30:35.278669] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:34.831 BaseBdev1 00:33:34.831 17:30:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.831 17:30:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:33:34.831 17:30:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:33:34.831 17:30:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.831 17:30:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:34.831 BaseBdev2_malloc 00:33:34.831 17:30:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.831 17:30:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:33:34.831 17:30:35 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.831 17:30:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:34.831 true 00:33:34.831 17:30:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.831 17:30:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:33:34.831 17:30:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.831 17:30:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:34.831 [2024-11-26 17:30:35.344788] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:33:34.831 [2024-11-26 17:30:35.344879] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:34.831 [2024-11-26 17:30:35.344918] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:33:34.831 [2024-11-26 17:30:35.344959] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:34.831 [2024-11-26 17:30:35.347115] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:34.831 [2024-11-26 17:30:35.347193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:33:34.831 BaseBdev2 00:33:34.831 17:30:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.831 17:30:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:33:34.831 17:30:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.831 17:30:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:34.831 [2024-11-26 17:30:35.352829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:33:34.831 [2024-11-26 17:30:35.354662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:34.831 [2024-11-26 17:30:35.354859] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:33:34.831 [2024-11-26 17:30:35.354875] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:33:34.831 [2024-11-26 17:30:35.355114] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:33:34.831 [2024-11-26 17:30:35.355285] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:33:34.831 [2024-11-26 17:30:35.355296] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:33:34.831 [2024-11-26 17:30:35.355458] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:34.831 17:30:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.831 17:30:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:34.831 17:30:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:34.831 17:30:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:34.831 17:30:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:34.831 17:30:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:34.831 17:30:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:34.831 17:30:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:34.831 17:30:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:34.831 17:30:35 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:34.831 17:30:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:34.831 17:30:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:34.831 17:30:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:34.831 17:30:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.831 17:30:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:34.831 17:30:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.831 17:30:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:34.831 "name": "raid_bdev1", 00:33:34.831 "uuid": "85d15d0d-57c8-4fe2-8c0b-9cbfe38d824e", 00:33:34.831 "strip_size_kb": 0, 00:33:34.831 "state": "online", 00:33:34.831 "raid_level": "raid1", 00:33:34.831 "superblock": true, 00:33:34.831 "num_base_bdevs": 2, 00:33:34.831 "num_base_bdevs_discovered": 2, 00:33:34.831 "num_base_bdevs_operational": 2, 00:33:34.831 "base_bdevs_list": [ 00:33:34.831 { 00:33:34.831 "name": "BaseBdev1", 00:33:34.831 "uuid": "e2cc61bd-25e8-5022-a822-ede0fbbc071b", 00:33:34.831 "is_configured": true, 00:33:34.831 "data_offset": 2048, 00:33:34.831 "data_size": 63488 00:33:34.831 }, 00:33:34.831 { 00:33:34.831 "name": "BaseBdev2", 00:33:34.831 "uuid": "37e75177-4fff-5212-bc19-0b8b98c5df68", 00:33:34.831 "is_configured": true, 00:33:34.831 "data_offset": 2048, 00:33:34.831 "data_size": 63488 00:33:34.831 } 00:33:34.831 ] 00:33:34.831 }' 00:33:34.831 17:30:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:34.831 17:30:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:35.091 17:30:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:33:35.091 17:30:35 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:33:35.352 [2024-11-26 17:30:35.841170] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:33:36.293 17:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:33:36.293 17:30:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.293 17:30:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:36.293 [2024-11-26 17:30:36.762103] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:33:36.293 [2024-11-26 17:30:36.762269] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:36.293 [2024-11-26 17:30:36.762496] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:33:36.293 17:30:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.293 17:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:33:36.293 17:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:33:36.293 17:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:33:36.293 17:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:33:36.293 17:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:36.293 17:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:36.293 17:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:36.293 17:30:36 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:36.293 17:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:36.293 17:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:36.293 17:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:36.293 17:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:36.293 17:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:36.293 17:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:36.293 17:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:36.293 17:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:36.293 17:30:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.293 17:30:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:36.293 17:30:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.293 17:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:36.293 "name": "raid_bdev1", 00:33:36.293 "uuid": "85d15d0d-57c8-4fe2-8c0b-9cbfe38d824e", 00:33:36.293 "strip_size_kb": 0, 00:33:36.293 "state": "online", 00:33:36.293 "raid_level": "raid1", 00:33:36.293 "superblock": true, 00:33:36.293 "num_base_bdevs": 2, 00:33:36.293 "num_base_bdevs_discovered": 1, 00:33:36.293 "num_base_bdevs_operational": 1, 00:33:36.293 "base_bdevs_list": [ 00:33:36.293 { 00:33:36.293 "name": null, 00:33:36.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:36.293 "is_configured": false, 00:33:36.293 "data_offset": 0, 00:33:36.293 "data_size": 63488 00:33:36.293 }, 00:33:36.293 { 00:33:36.293 "name": 
"BaseBdev2", 00:33:36.293 "uuid": "37e75177-4fff-5212-bc19-0b8b98c5df68", 00:33:36.293 "is_configured": true, 00:33:36.294 "data_offset": 2048, 00:33:36.294 "data_size": 63488 00:33:36.294 } 00:33:36.294 ] 00:33:36.294 }' 00:33:36.294 17:30:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:36.294 17:30:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:36.554 17:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:36.554 17:30:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.554 17:30:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:36.554 [2024-11-26 17:30:37.220009] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:36.554 [2024-11-26 17:30:37.220120] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:36.554 [2024-11-26 17:30:37.222836] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:36.554 [2024-11-26 17:30:37.222873] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:36.554 [2024-11-26 17:30:37.222928] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:36.554 [2024-11-26 17:30:37.222940] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:33:36.554 { 00:33:36.554 "results": [ 00:33:36.554 { 00:33:36.554 "job": "raid_bdev1", 00:33:36.554 "core_mask": "0x1", 00:33:36.554 "workload": "randrw", 00:33:36.554 "percentage": 50, 00:33:36.554 "status": "finished", 00:33:36.554 "queue_depth": 1, 00:33:36.554 "io_size": 131072, 00:33:36.554 "runtime": 1.379692, 00:33:36.554 "iops": 20737.961805968287, 00:33:36.554 "mibps": 2592.245225746036, 00:33:36.554 "io_failed": 0, 00:33:36.554 "io_timeout": 0, 
00:33:36.554 "avg_latency_us": 45.52028067742059, 00:33:36.554 "min_latency_us": 22.581659388646287, 00:33:36.554 "max_latency_us": 1423.7624454148472 00:33:36.554 } 00:33:36.554 ], 00:33:36.554 "core_count": 1 00:33:36.554 } 00:33:36.554 17:30:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.554 17:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63915 00:33:36.554 17:30:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63915 ']' 00:33:36.554 17:30:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63915 00:33:36.554 17:30:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:33:36.554 17:30:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:36.554 17:30:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63915 00:33:36.814 17:30:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:36.814 17:30:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:36.814 17:30:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63915' 00:33:36.814 killing process with pid 63915 00:33:36.814 17:30:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63915 00:33:36.814 [2024-11-26 17:30:37.272009] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:36.814 17:30:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63915 00:33:36.814 [2024-11-26 17:30:37.408963] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:38.197 17:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:33:38.197 17:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v 
Job /raidtest/tmp.jYwzOnEzz1 00:33:38.197 17:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:33:38.197 17:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:33:38.197 17:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:33:38.197 17:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:33:38.197 17:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:33:38.197 17:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:33:38.197 00:33:38.197 real 0m4.350s 00:33:38.197 user 0m5.177s 00:33:38.197 sys 0m0.535s 00:33:38.197 17:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:38.197 17:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:38.197 ************************************ 00:33:38.197 END TEST raid_write_error_test 00:33:38.197 ************************************ 00:33:38.197 17:30:38 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:33:38.197 17:30:38 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:33:38.197 17:30:38 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:33:38.197 17:30:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:33:38.197 17:30:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:38.197 17:30:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:38.197 ************************************ 00:33:38.197 START TEST raid_state_function_test 00:33:38.197 ************************************ 00:33:38.197 17:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:33:38.197 17:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # 
local raid_level=raid0 00:33:38.197 17:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:33:38.197 17:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:33:38.197 17:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:33:38.197 17:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:33:38.197 17:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:38.197 17:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:33:38.197 17:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:33:38.197 17:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:38.197 17:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:33:38.197 17:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:33:38.197 17:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:38.197 17:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:33:38.197 17:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:33:38.197 17:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:38.197 17:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:33:38.198 17:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:33:38.198 17:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:33:38.198 17:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:33:38.198 17:30:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:33:38.198 17:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:33:38.198 17:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:33:38.198 17:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:33:38.198 17:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:33:38.198 17:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:33:38.198 17:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:33:38.198 17:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=64059 00:33:38.198 17:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:33:38.198 17:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64059' 00:33:38.198 Process raid pid: 64059 00:33:38.198 17:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 64059 00:33:38.198 17:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 64059 ']' 00:33:38.198 17:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:38.198 17:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:38.198 17:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:38.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:38.198 17:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:38.198 17:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:38.198 [2024-11-26 17:30:38.795580] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:33:38.198 [2024-11-26 17:30:38.795775] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:38.457 [2024-11-26 17:30:38.971020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:38.457 [2024-11-26 17:30:39.088746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:38.725 [2024-11-26 17:30:39.309295] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:38.725 [2024-11-26 17:30:39.309434] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:38.988 17:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:38.988 17:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:33:38.988 17:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:33:38.988 17:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:38.988 17:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:38.988 [2024-11-26 17:30:39.635926] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:38.988 [2024-11-26 17:30:39.635986] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:38.988 [2024-11-26 17:30:39.635998] 
bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:38.988 [2024-11-26 17:30:39.636009] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:38.988 [2024-11-26 17:30:39.636016] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:33:38.989 [2024-11-26 17:30:39.636026] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:33:38.989 17:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:38.989 17:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:33:38.989 17:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:38.989 17:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:38.989 17:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:38.989 17:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:38.989 17:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:38.989 17:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:38.989 17:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:38.989 17:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:38.989 17:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:38.989 17:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:38.989 17:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:33:38.989 17:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:38.989 17:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:38.989 17:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.248 17:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:39.248 "name": "Existed_Raid", 00:33:39.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:39.248 "strip_size_kb": 64, 00:33:39.248 "state": "configuring", 00:33:39.248 "raid_level": "raid0", 00:33:39.248 "superblock": false, 00:33:39.248 "num_base_bdevs": 3, 00:33:39.248 "num_base_bdevs_discovered": 0, 00:33:39.248 "num_base_bdevs_operational": 3, 00:33:39.248 "base_bdevs_list": [ 00:33:39.248 { 00:33:39.248 "name": "BaseBdev1", 00:33:39.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:39.248 "is_configured": false, 00:33:39.248 "data_offset": 0, 00:33:39.248 "data_size": 0 00:33:39.248 }, 00:33:39.248 { 00:33:39.248 "name": "BaseBdev2", 00:33:39.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:39.248 "is_configured": false, 00:33:39.248 "data_offset": 0, 00:33:39.248 "data_size": 0 00:33:39.248 }, 00:33:39.248 { 00:33:39.248 "name": "BaseBdev3", 00:33:39.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:39.248 "is_configured": false, 00:33:39.248 "data_offset": 0, 00:33:39.248 "data_size": 0 00:33:39.248 } 00:33:39.248 ] 00:33:39.248 }' 00:33:39.248 17:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:39.248 17:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:39.509 17:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:33:39.509 17:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.509 17:30:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:39.509 [2024-11-26 17:30:40.063161] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:39.509 [2024-11-26 17:30:40.063204] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:33:39.509 17:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.509 17:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:33:39.509 17:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.509 17:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:39.509 [2024-11-26 17:30:40.071132] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:39.509 [2024-11-26 17:30:40.071180] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:39.509 [2024-11-26 17:30:40.071189] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:39.509 [2024-11-26 17:30:40.071198] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:39.509 [2024-11-26 17:30:40.071204] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:33:39.509 [2024-11-26 17:30:40.071213] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:33:39.509 17:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.509 17:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:33:39.509 17:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:33:39.509 17:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:39.509 [2024-11-26 17:30:40.117616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:39.509 BaseBdev1 00:33:39.509 17:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.509 17:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:33:39.509 17:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:33:39.509 17:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:39.509 17:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:33:39.509 17:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:39.509 17:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:39.509 17:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:39.509 17:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.509 17:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:39.509 17:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.509 17:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:33:39.509 17:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.509 17:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:39.509 [ 00:33:39.509 { 00:33:39.509 "name": "BaseBdev1", 00:33:39.509 "aliases": [ 00:33:39.509 "06900c7f-45d0-4413-af6b-3b8d68a30c49" 00:33:39.509 ], 00:33:39.509 
"product_name": "Malloc disk", 00:33:39.509 "block_size": 512, 00:33:39.509 "num_blocks": 65536, 00:33:39.509 "uuid": "06900c7f-45d0-4413-af6b-3b8d68a30c49", 00:33:39.509 "assigned_rate_limits": { 00:33:39.509 "rw_ios_per_sec": 0, 00:33:39.509 "rw_mbytes_per_sec": 0, 00:33:39.509 "r_mbytes_per_sec": 0, 00:33:39.509 "w_mbytes_per_sec": 0 00:33:39.509 }, 00:33:39.509 "claimed": true, 00:33:39.509 "claim_type": "exclusive_write", 00:33:39.509 "zoned": false, 00:33:39.509 "supported_io_types": { 00:33:39.509 "read": true, 00:33:39.509 "write": true, 00:33:39.509 "unmap": true, 00:33:39.509 "flush": true, 00:33:39.509 "reset": true, 00:33:39.509 "nvme_admin": false, 00:33:39.509 "nvme_io": false, 00:33:39.509 "nvme_io_md": false, 00:33:39.509 "write_zeroes": true, 00:33:39.509 "zcopy": true, 00:33:39.509 "get_zone_info": false, 00:33:39.509 "zone_management": false, 00:33:39.509 "zone_append": false, 00:33:39.509 "compare": false, 00:33:39.509 "compare_and_write": false, 00:33:39.509 "abort": true, 00:33:39.509 "seek_hole": false, 00:33:39.509 "seek_data": false, 00:33:39.509 "copy": true, 00:33:39.509 "nvme_iov_md": false 00:33:39.509 }, 00:33:39.509 "memory_domains": [ 00:33:39.509 { 00:33:39.509 "dma_device_id": "system", 00:33:39.509 "dma_device_type": 1 00:33:39.509 }, 00:33:39.509 { 00:33:39.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:39.509 "dma_device_type": 2 00:33:39.509 } 00:33:39.509 ], 00:33:39.509 "driver_specific": {} 00:33:39.509 } 00:33:39.509 ] 00:33:39.509 17:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.509 17:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:33:39.509 17:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:33:39.509 17:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:39.509 17:30:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:39.509 17:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:39.509 17:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:39.509 17:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:39.509 17:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:39.509 17:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:39.509 17:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:39.509 17:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:39.509 17:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:39.509 17:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:39.509 17:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.509 17:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:39.509 17:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.509 17:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:39.509 "name": "Existed_Raid", 00:33:39.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:39.509 "strip_size_kb": 64, 00:33:39.509 "state": "configuring", 00:33:39.509 "raid_level": "raid0", 00:33:39.509 "superblock": false, 00:33:39.509 "num_base_bdevs": 3, 00:33:39.509 "num_base_bdevs_discovered": 1, 00:33:39.509 "num_base_bdevs_operational": 3, 00:33:39.509 "base_bdevs_list": [ 00:33:39.509 { 00:33:39.509 "name": "BaseBdev1", 
00:33:39.509 "uuid": "06900c7f-45d0-4413-af6b-3b8d68a30c49", 00:33:39.509 "is_configured": true, 00:33:39.509 "data_offset": 0, 00:33:39.509 "data_size": 65536 00:33:39.509 }, 00:33:39.509 { 00:33:39.509 "name": "BaseBdev2", 00:33:39.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:39.509 "is_configured": false, 00:33:39.509 "data_offset": 0, 00:33:39.509 "data_size": 0 00:33:39.509 }, 00:33:39.509 { 00:33:39.509 "name": "BaseBdev3", 00:33:39.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:39.509 "is_configured": false, 00:33:39.509 "data_offset": 0, 00:33:39.509 "data_size": 0 00:33:39.509 } 00:33:39.509 ] 00:33:39.509 }' 00:33:39.509 17:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:39.509 17:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:40.079 17:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:33:40.079 17:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.079 17:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:40.079 [2024-11-26 17:30:40.596863] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:40.079 [2024-11-26 17:30:40.596983] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:33:40.079 17:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.079 17:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:33:40.079 17:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.079 17:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:40.079 [2024-11-26 
17:30:40.608908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:40.079 [2024-11-26 17:30:40.610947] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:40.079 [2024-11-26 17:30:40.611028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:40.079 [2024-11-26 17:30:40.611063] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:33:40.079 [2024-11-26 17:30:40.611106] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:33:40.079 17:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.079 17:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:33:40.079 17:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:33:40.079 17:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:33:40.079 17:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:40.079 17:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:40.079 17:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:40.079 17:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:40.079 17:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:40.079 17:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:40.079 17:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:40.079 17:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:33:40.079 17:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:40.079 17:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:40.079 17:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:40.079 17:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.079 17:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:40.079 17:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.079 17:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:40.079 "name": "Existed_Raid", 00:33:40.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:40.079 "strip_size_kb": 64, 00:33:40.079 "state": "configuring", 00:33:40.079 "raid_level": "raid0", 00:33:40.079 "superblock": false, 00:33:40.079 "num_base_bdevs": 3, 00:33:40.079 "num_base_bdevs_discovered": 1, 00:33:40.079 "num_base_bdevs_operational": 3, 00:33:40.079 "base_bdevs_list": [ 00:33:40.079 { 00:33:40.079 "name": "BaseBdev1", 00:33:40.079 "uuid": "06900c7f-45d0-4413-af6b-3b8d68a30c49", 00:33:40.079 "is_configured": true, 00:33:40.079 "data_offset": 0, 00:33:40.079 "data_size": 65536 00:33:40.079 }, 00:33:40.079 { 00:33:40.079 "name": "BaseBdev2", 00:33:40.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:40.079 "is_configured": false, 00:33:40.079 "data_offset": 0, 00:33:40.079 "data_size": 0 00:33:40.079 }, 00:33:40.079 { 00:33:40.079 "name": "BaseBdev3", 00:33:40.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:40.079 "is_configured": false, 00:33:40.079 "data_offset": 0, 00:33:40.079 "data_size": 0 00:33:40.079 } 00:33:40.079 ] 00:33:40.079 }' 00:33:40.079 17:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:33:40.079 17:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:40.673 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:33:40.673 17:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.673 17:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:40.674 [2024-11-26 17:30:41.156363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:40.674 BaseBdev2 00:33:40.674 17:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.674 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:33:40.674 17:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:33:40.674 17:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:40.674 17:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:33:40.674 17:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:40.674 17:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:40.674 17:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:40.674 17:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.674 17:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:40.674 17:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.674 17:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:33:40.674 17:30:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.674 17:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:40.674 [ 00:33:40.674 { 00:33:40.674 "name": "BaseBdev2", 00:33:40.674 "aliases": [ 00:33:40.674 "44cf4752-dc35-40dc-a41e-290ec8b25e2f" 00:33:40.674 ], 00:33:40.674 "product_name": "Malloc disk", 00:33:40.674 "block_size": 512, 00:33:40.674 "num_blocks": 65536, 00:33:40.674 "uuid": "44cf4752-dc35-40dc-a41e-290ec8b25e2f", 00:33:40.674 "assigned_rate_limits": { 00:33:40.674 "rw_ios_per_sec": 0, 00:33:40.674 "rw_mbytes_per_sec": 0, 00:33:40.674 "r_mbytes_per_sec": 0, 00:33:40.674 "w_mbytes_per_sec": 0 00:33:40.674 }, 00:33:40.674 "claimed": true, 00:33:40.674 "claim_type": "exclusive_write", 00:33:40.674 "zoned": false, 00:33:40.674 "supported_io_types": { 00:33:40.674 "read": true, 00:33:40.674 "write": true, 00:33:40.674 "unmap": true, 00:33:40.674 "flush": true, 00:33:40.674 "reset": true, 00:33:40.674 "nvme_admin": false, 00:33:40.674 "nvme_io": false, 00:33:40.674 "nvme_io_md": false, 00:33:40.674 "write_zeroes": true, 00:33:40.674 "zcopy": true, 00:33:40.674 "get_zone_info": false, 00:33:40.674 "zone_management": false, 00:33:40.674 "zone_append": false, 00:33:40.674 "compare": false, 00:33:40.674 "compare_and_write": false, 00:33:40.674 "abort": true, 00:33:40.674 "seek_hole": false, 00:33:40.674 "seek_data": false, 00:33:40.674 "copy": true, 00:33:40.674 "nvme_iov_md": false 00:33:40.674 }, 00:33:40.674 "memory_domains": [ 00:33:40.674 { 00:33:40.674 "dma_device_id": "system", 00:33:40.674 "dma_device_type": 1 00:33:40.674 }, 00:33:40.674 { 00:33:40.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:40.674 "dma_device_type": 2 00:33:40.674 } 00:33:40.674 ], 00:33:40.674 "driver_specific": {} 00:33:40.674 } 00:33:40.674 ] 00:33:40.674 17:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.674 17:30:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:33:40.674 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:33:40.674 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:33:40.674 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:33:40.674 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:40.674 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:40.674 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:40.674 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:40.674 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:40.674 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:40.674 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:40.674 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:40.674 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:40.674 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:40.674 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:40.674 17:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.674 17:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:40.674 17:30:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.674 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:40.674 "name": "Existed_Raid", 00:33:40.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:40.674 "strip_size_kb": 64, 00:33:40.674 "state": "configuring", 00:33:40.674 "raid_level": "raid0", 00:33:40.674 "superblock": false, 00:33:40.674 "num_base_bdevs": 3, 00:33:40.674 "num_base_bdevs_discovered": 2, 00:33:40.674 "num_base_bdevs_operational": 3, 00:33:40.674 "base_bdevs_list": [ 00:33:40.674 { 00:33:40.674 "name": "BaseBdev1", 00:33:40.674 "uuid": "06900c7f-45d0-4413-af6b-3b8d68a30c49", 00:33:40.674 "is_configured": true, 00:33:40.674 "data_offset": 0, 00:33:40.674 "data_size": 65536 00:33:40.674 }, 00:33:40.674 { 00:33:40.674 "name": "BaseBdev2", 00:33:40.674 "uuid": "44cf4752-dc35-40dc-a41e-290ec8b25e2f", 00:33:40.674 "is_configured": true, 00:33:40.674 "data_offset": 0, 00:33:40.674 "data_size": 65536 00:33:40.674 }, 00:33:40.674 { 00:33:40.674 "name": "BaseBdev3", 00:33:40.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:40.674 "is_configured": false, 00:33:40.674 "data_offset": 0, 00:33:40.674 "data_size": 0 00:33:40.674 } 00:33:40.674 ] 00:33:40.674 }' 00:33:40.674 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:40.674 17:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:40.932 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:33:40.932 17:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.932 17:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:41.192 [2024-11-26 17:30:41.642249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:41.192 [2024-11-26 17:30:41.642303] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:33:41.192 [2024-11-26 17:30:41.642316] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:33:41.192 [2024-11-26 17:30:41.642631] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:33:41.192 [2024-11-26 17:30:41.642814] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:33:41.192 [2024-11-26 17:30:41.642825] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:33:41.192 [2024-11-26 17:30:41.643124] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:41.192 BaseBdev3 00:33:41.192 17:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.192 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:33:41.192 17:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:33:41.192 17:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:41.192 17:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:33:41.192 17:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:41.192 17:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:41.192 17:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:41.192 17:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.192 17:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:41.192 17:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.192 
17:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:33:41.192 17:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.192 17:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:41.192 [ 00:33:41.192 { 00:33:41.192 "name": "BaseBdev3", 00:33:41.192 "aliases": [ 00:33:41.192 "5e191964-1215-4c67-9b59-bb7a21275509" 00:33:41.192 ], 00:33:41.192 "product_name": "Malloc disk", 00:33:41.192 "block_size": 512, 00:33:41.192 "num_blocks": 65536, 00:33:41.192 "uuid": "5e191964-1215-4c67-9b59-bb7a21275509", 00:33:41.192 "assigned_rate_limits": { 00:33:41.192 "rw_ios_per_sec": 0, 00:33:41.192 "rw_mbytes_per_sec": 0, 00:33:41.192 "r_mbytes_per_sec": 0, 00:33:41.192 "w_mbytes_per_sec": 0 00:33:41.192 }, 00:33:41.192 "claimed": true, 00:33:41.193 "claim_type": "exclusive_write", 00:33:41.193 "zoned": false, 00:33:41.193 "supported_io_types": { 00:33:41.193 "read": true, 00:33:41.193 "write": true, 00:33:41.193 "unmap": true, 00:33:41.193 "flush": true, 00:33:41.193 "reset": true, 00:33:41.193 "nvme_admin": false, 00:33:41.193 "nvme_io": false, 00:33:41.193 "nvme_io_md": false, 00:33:41.193 "write_zeroes": true, 00:33:41.193 "zcopy": true, 00:33:41.193 "get_zone_info": false, 00:33:41.193 "zone_management": false, 00:33:41.193 "zone_append": false, 00:33:41.193 "compare": false, 00:33:41.193 "compare_and_write": false, 00:33:41.193 "abort": true, 00:33:41.193 "seek_hole": false, 00:33:41.193 "seek_data": false, 00:33:41.193 "copy": true, 00:33:41.193 "nvme_iov_md": false 00:33:41.193 }, 00:33:41.193 "memory_domains": [ 00:33:41.193 { 00:33:41.193 "dma_device_id": "system", 00:33:41.193 "dma_device_type": 1 00:33:41.193 }, 00:33:41.193 { 00:33:41.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:41.193 "dma_device_type": 2 00:33:41.193 } 00:33:41.193 ], 00:33:41.193 "driver_specific": {} 00:33:41.193 } 00:33:41.193 ] 
00:33:41.193 17:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.193 17:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:33:41.193 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:33:41.193 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:33:41.193 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:33:41.193 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:41.193 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:41.193 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:41.193 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:41.193 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:41.193 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:41.193 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:41.193 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:41.193 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:41.193 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:41.193 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:41.193 17:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.193 17:30:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:33:41.193 17:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.193 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:41.193 "name": "Existed_Raid", 00:33:41.193 "uuid": "d29a2a41-52ee-4e18-80dc-0dcaf1dca7fe", 00:33:41.193 "strip_size_kb": 64, 00:33:41.193 "state": "online", 00:33:41.193 "raid_level": "raid0", 00:33:41.193 "superblock": false, 00:33:41.193 "num_base_bdevs": 3, 00:33:41.193 "num_base_bdevs_discovered": 3, 00:33:41.193 "num_base_bdevs_operational": 3, 00:33:41.193 "base_bdevs_list": [ 00:33:41.193 { 00:33:41.193 "name": "BaseBdev1", 00:33:41.193 "uuid": "06900c7f-45d0-4413-af6b-3b8d68a30c49", 00:33:41.193 "is_configured": true, 00:33:41.193 "data_offset": 0, 00:33:41.193 "data_size": 65536 00:33:41.193 }, 00:33:41.193 { 00:33:41.193 "name": "BaseBdev2", 00:33:41.193 "uuid": "44cf4752-dc35-40dc-a41e-290ec8b25e2f", 00:33:41.193 "is_configured": true, 00:33:41.193 "data_offset": 0, 00:33:41.193 "data_size": 65536 00:33:41.193 }, 00:33:41.193 { 00:33:41.193 "name": "BaseBdev3", 00:33:41.193 "uuid": "5e191964-1215-4c67-9b59-bb7a21275509", 00:33:41.193 "is_configured": true, 00:33:41.193 "data_offset": 0, 00:33:41.193 "data_size": 65536 00:33:41.193 } 00:33:41.193 ] 00:33:41.193 }' 00:33:41.193 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:41.193 17:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:41.453 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:33:41.453 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:33:41.453 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:33:41.453 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:33:41.453 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:33:41.453 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:33:41.453 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:33:41.453 17:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.453 17:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:41.453 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:33:41.453 [2024-11-26 17:30:42.053984] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:41.453 17:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.453 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:41.453 "name": "Existed_Raid", 00:33:41.453 "aliases": [ 00:33:41.453 "d29a2a41-52ee-4e18-80dc-0dcaf1dca7fe" 00:33:41.453 ], 00:33:41.453 "product_name": "Raid Volume", 00:33:41.453 "block_size": 512, 00:33:41.453 "num_blocks": 196608, 00:33:41.453 "uuid": "d29a2a41-52ee-4e18-80dc-0dcaf1dca7fe", 00:33:41.453 "assigned_rate_limits": { 00:33:41.453 "rw_ios_per_sec": 0, 00:33:41.453 "rw_mbytes_per_sec": 0, 00:33:41.453 "r_mbytes_per_sec": 0, 00:33:41.453 "w_mbytes_per_sec": 0 00:33:41.453 }, 00:33:41.453 "claimed": false, 00:33:41.453 "zoned": false, 00:33:41.453 "supported_io_types": { 00:33:41.453 "read": true, 00:33:41.453 "write": true, 00:33:41.453 "unmap": true, 00:33:41.453 "flush": true, 00:33:41.453 "reset": true, 00:33:41.453 "nvme_admin": false, 00:33:41.453 "nvme_io": false, 00:33:41.453 "nvme_io_md": false, 00:33:41.453 "write_zeroes": true, 00:33:41.453 "zcopy": false, 00:33:41.453 "get_zone_info": false, 00:33:41.453 "zone_management": false, 00:33:41.453 
"zone_append": false, 00:33:41.453 "compare": false, 00:33:41.453 "compare_and_write": false, 00:33:41.453 "abort": false, 00:33:41.453 "seek_hole": false, 00:33:41.453 "seek_data": false, 00:33:41.453 "copy": false, 00:33:41.453 "nvme_iov_md": false 00:33:41.453 }, 00:33:41.453 "memory_domains": [ 00:33:41.453 { 00:33:41.453 "dma_device_id": "system", 00:33:41.453 "dma_device_type": 1 00:33:41.453 }, 00:33:41.453 { 00:33:41.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:41.453 "dma_device_type": 2 00:33:41.453 }, 00:33:41.453 { 00:33:41.453 "dma_device_id": "system", 00:33:41.453 "dma_device_type": 1 00:33:41.453 }, 00:33:41.453 { 00:33:41.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:41.453 "dma_device_type": 2 00:33:41.453 }, 00:33:41.453 { 00:33:41.453 "dma_device_id": "system", 00:33:41.453 "dma_device_type": 1 00:33:41.453 }, 00:33:41.453 { 00:33:41.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:41.453 "dma_device_type": 2 00:33:41.453 } 00:33:41.453 ], 00:33:41.453 "driver_specific": { 00:33:41.453 "raid": { 00:33:41.453 "uuid": "d29a2a41-52ee-4e18-80dc-0dcaf1dca7fe", 00:33:41.453 "strip_size_kb": 64, 00:33:41.453 "state": "online", 00:33:41.453 "raid_level": "raid0", 00:33:41.453 "superblock": false, 00:33:41.453 "num_base_bdevs": 3, 00:33:41.453 "num_base_bdevs_discovered": 3, 00:33:41.453 "num_base_bdevs_operational": 3, 00:33:41.453 "base_bdevs_list": [ 00:33:41.453 { 00:33:41.453 "name": "BaseBdev1", 00:33:41.453 "uuid": "06900c7f-45d0-4413-af6b-3b8d68a30c49", 00:33:41.453 "is_configured": true, 00:33:41.453 "data_offset": 0, 00:33:41.453 "data_size": 65536 00:33:41.453 }, 00:33:41.453 { 00:33:41.453 "name": "BaseBdev2", 00:33:41.453 "uuid": "44cf4752-dc35-40dc-a41e-290ec8b25e2f", 00:33:41.453 "is_configured": true, 00:33:41.453 "data_offset": 0, 00:33:41.453 "data_size": 65536 00:33:41.453 }, 00:33:41.453 { 00:33:41.453 "name": "BaseBdev3", 00:33:41.453 "uuid": "5e191964-1215-4c67-9b59-bb7a21275509", 00:33:41.453 "is_configured": true, 
00:33:41.453 "data_offset": 0, 00:33:41.453 "data_size": 65536 00:33:41.453 } 00:33:41.453 ] 00:33:41.453 } 00:33:41.453 } 00:33:41.453 }' 00:33:41.453 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:41.453 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:33:41.454 BaseBdev2 00:33:41.454 BaseBdev3' 00:33:41.713 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:41.713 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:33:41.713 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:41.713 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:41.713 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:33:41.713 17:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.713 17:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:41.713 17:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.713 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:41.713 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:41.713 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:41.713 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:33:41.713 17:30:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.713 17:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:41.713 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:41.713 17:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.713 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:41.713 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:41.713 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:41.713 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:33:41.713 17:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.713 17:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:41.713 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:41.713 17:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.713 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:41.713 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:41.713 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:33:41.713 17:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.713 17:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:41.713 [2024-11-26 17:30:42.321248] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:41.713 [2024-11-26 17:30:42.321282] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:41.713 [2024-11-26 17:30:42.321340] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:41.973 17:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.973 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:33:41.973 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:33:41.973 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:33:41.973 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:33:41.973 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:33:41.973 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:33:41.973 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:41.973 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:33:41.973 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:41.973 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:41.973 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:41.973 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:41.973 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:41.973 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:33:41.973 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:41.973 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:41.973 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:41.973 17:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.973 17:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:41.973 17:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.973 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:41.973 "name": "Existed_Raid", 00:33:41.973 "uuid": "d29a2a41-52ee-4e18-80dc-0dcaf1dca7fe", 00:33:41.973 "strip_size_kb": 64, 00:33:41.973 "state": "offline", 00:33:41.973 "raid_level": "raid0", 00:33:41.973 "superblock": false, 00:33:41.973 "num_base_bdevs": 3, 00:33:41.973 "num_base_bdevs_discovered": 2, 00:33:41.973 "num_base_bdevs_operational": 2, 00:33:41.973 "base_bdevs_list": [ 00:33:41.973 { 00:33:41.973 "name": null, 00:33:41.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:41.973 "is_configured": false, 00:33:41.973 "data_offset": 0, 00:33:41.973 "data_size": 65536 00:33:41.973 }, 00:33:41.973 { 00:33:41.973 "name": "BaseBdev2", 00:33:41.973 "uuid": "44cf4752-dc35-40dc-a41e-290ec8b25e2f", 00:33:41.973 "is_configured": true, 00:33:41.973 "data_offset": 0, 00:33:41.973 "data_size": 65536 00:33:41.973 }, 00:33:41.973 { 00:33:41.973 "name": "BaseBdev3", 00:33:41.973 "uuid": "5e191964-1215-4c67-9b59-bb7a21275509", 00:33:41.973 "is_configured": true, 00:33:41.973 "data_offset": 0, 00:33:41.973 "data_size": 65536 00:33:41.973 } 00:33:41.973 ] 00:33:41.973 }' 00:33:41.973 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:41.973 17:30:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:42.232 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:33:42.232 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:33:42.232 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:42.232 17:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.232 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:33:42.232 17:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:42.232 17:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.492 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:33:42.492 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:42.492 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:33:42.492 17:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.492 17:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:42.492 [2024-11-26 17:30:42.941866] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:42.492 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.492 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:33:42.492 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:33:42.492 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:42.492 17:30:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.492 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:42.492 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:33:42.492 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.492 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:33:42.492 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:42.493 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:33:42.493 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.493 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:42.493 [2024-11-26 17:30:43.104091] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:33:42.493 [2024-11-26 17:30:43.104229] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:33:42.752 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.752 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:33:42.752 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:33:42.752 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:33:42.752 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:42.752 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.752 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:33:42.752 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.752 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:33:42.752 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:33:42.752 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:33:42.752 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:33:42.752 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:33:42.752 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:33:42.752 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.752 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:42.752 BaseBdev2 00:33:42.752 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.752 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:33:42.752 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:33:42.752 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:42.752 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:33:42.752 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:42.753 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:42.753 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:42.753 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:33:42.753 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:42.753 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.753 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:33:42.753 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.753 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:42.753 [ 00:33:42.753 { 00:33:42.753 "name": "BaseBdev2", 00:33:42.753 "aliases": [ 00:33:42.753 "c9705314-382e-4acd-adcb-9fbfc741f869" 00:33:42.753 ], 00:33:42.753 "product_name": "Malloc disk", 00:33:42.753 "block_size": 512, 00:33:42.753 "num_blocks": 65536, 00:33:42.753 "uuid": "c9705314-382e-4acd-adcb-9fbfc741f869", 00:33:42.753 "assigned_rate_limits": { 00:33:42.753 "rw_ios_per_sec": 0, 00:33:42.753 "rw_mbytes_per_sec": 0, 00:33:42.753 "r_mbytes_per_sec": 0, 00:33:42.753 "w_mbytes_per_sec": 0 00:33:42.753 }, 00:33:42.753 "claimed": false, 00:33:42.753 "zoned": false, 00:33:42.753 "supported_io_types": { 00:33:42.753 "read": true, 00:33:42.753 "write": true, 00:33:42.753 "unmap": true, 00:33:42.753 "flush": true, 00:33:42.753 "reset": true, 00:33:42.753 "nvme_admin": false, 00:33:42.753 "nvme_io": false, 00:33:42.753 "nvme_io_md": false, 00:33:42.753 "write_zeroes": true, 00:33:42.753 "zcopy": true, 00:33:42.753 "get_zone_info": false, 00:33:42.753 "zone_management": false, 00:33:42.753 "zone_append": false, 00:33:42.753 "compare": false, 00:33:42.753 "compare_and_write": false, 00:33:42.753 "abort": true, 00:33:42.753 "seek_hole": false, 00:33:42.753 "seek_data": false, 00:33:42.753 "copy": true, 00:33:42.753 "nvme_iov_md": false 00:33:42.753 }, 00:33:42.753 "memory_domains": [ 00:33:42.753 { 00:33:42.753 "dma_device_id": "system", 00:33:42.753 "dma_device_type": 1 00:33:42.753 }, 
00:33:42.753 { 00:33:42.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:42.753 "dma_device_type": 2 00:33:42.753 } 00:33:42.753 ], 00:33:42.753 "driver_specific": {} 00:33:42.753 } 00:33:42.753 ] 00:33:42.753 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.753 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:33:42.753 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:33:42.753 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:33:42.753 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:33:42.753 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.753 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:42.753 BaseBdev3 00:33:42.753 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.753 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:33:42.753 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:33:42.753 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:42.753 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:33:42.753 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:42.753 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:42.753 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:42.753 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:33:42.753 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:42.753 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.753 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:33:42.753 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.753 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:42.753 [ 00:33:42.753 { 00:33:42.753 "name": "BaseBdev3", 00:33:42.753 "aliases": [ 00:33:42.753 "bf39025c-3463-411f-889a-a4eab0d14a25" 00:33:42.753 ], 00:33:42.753 "product_name": "Malloc disk", 00:33:42.753 "block_size": 512, 00:33:42.753 "num_blocks": 65536, 00:33:42.753 "uuid": "bf39025c-3463-411f-889a-a4eab0d14a25", 00:33:42.753 "assigned_rate_limits": { 00:33:42.753 "rw_ios_per_sec": 0, 00:33:42.753 "rw_mbytes_per_sec": 0, 00:33:42.753 "r_mbytes_per_sec": 0, 00:33:42.753 "w_mbytes_per_sec": 0 00:33:42.753 }, 00:33:42.753 "claimed": false, 00:33:42.753 "zoned": false, 00:33:42.753 "supported_io_types": { 00:33:42.753 "read": true, 00:33:42.753 "write": true, 00:33:42.753 "unmap": true, 00:33:42.753 "flush": true, 00:33:42.753 "reset": true, 00:33:42.753 "nvme_admin": false, 00:33:42.753 "nvme_io": false, 00:33:42.753 "nvme_io_md": false, 00:33:42.753 "write_zeroes": true, 00:33:42.753 "zcopy": true, 00:33:42.753 "get_zone_info": false, 00:33:42.753 "zone_management": false, 00:33:42.753 "zone_append": false, 00:33:42.753 "compare": false, 00:33:42.753 "compare_and_write": false, 00:33:42.753 "abort": true, 00:33:42.753 "seek_hole": false, 00:33:42.753 "seek_data": false, 00:33:42.753 "copy": true, 00:33:42.753 "nvme_iov_md": false 00:33:42.753 }, 00:33:42.753 "memory_domains": [ 00:33:42.753 { 00:33:42.753 "dma_device_id": "system", 00:33:42.753 "dma_device_type": 1 00:33:42.753 }, 00:33:42.753 { 
00:33:42.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:42.753 "dma_device_type": 2 00:33:42.753 } 00:33:42.753 ], 00:33:42.753 "driver_specific": {} 00:33:42.753 } 00:33:42.753 ] 00:33:42.753 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.753 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:33:42.753 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:33:42.753 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:33:42.753 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:33:42.753 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.753 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:42.753 [2024-11-26 17:30:43.438662] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:42.753 [2024-11-26 17:30:43.438821] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:42.753 [2024-11-26 17:30:43.438880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:42.753 [2024-11-26 17:30:43.441150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:42.753 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.753 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:33:42.753 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:42.753 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:33:42.753 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:42.753 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:42.753 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:42.753 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:42.753 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:43.012 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:43.012 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:43.012 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:43.012 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:43.012 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.012 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:43.012 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.012 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:43.012 "name": "Existed_Raid", 00:33:43.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:43.012 "strip_size_kb": 64, 00:33:43.012 "state": "configuring", 00:33:43.012 "raid_level": "raid0", 00:33:43.012 "superblock": false, 00:33:43.012 "num_base_bdevs": 3, 00:33:43.012 "num_base_bdevs_discovered": 2, 00:33:43.012 "num_base_bdevs_operational": 3, 00:33:43.012 "base_bdevs_list": [ 00:33:43.012 { 00:33:43.012 "name": "BaseBdev1", 00:33:43.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:43.012 
"is_configured": false, 00:33:43.012 "data_offset": 0, 00:33:43.012 "data_size": 0 00:33:43.012 }, 00:33:43.012 { 00:33:43.012 "name": "BaseBdev2", 00:33:43.012 "uuid": "c9705314-382e-4acd-adcb-9fbfc741f869", 00:33:43.012 "is_configured": true, 00:33:43.012 "data_offset": 0, 00:33:43.012 "data_size": 65536 00:33:43.012 }, 00:33:43.012 { 00:33:43.012 "name": "BaseBdev3", 00:33:43.012 "uuid": "bf39025c-3463-411f-889a-a4eab0d14a25", 00:33:43.012 "is_configured": true, 00:33:43.013 "data_offset": 0, 00:33:43.013 "data_size": 65536 00:33:43.013 } 00:33:43.013 ] 00:33:43.013 }' 00:33:43.013 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:43.013 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:43.582 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:33:43.582 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.582 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:43.582 [2024-11-26 17:30:43.981758] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:43.582 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.582 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:33:43.582 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:43.582 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:43.582 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:43.582 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:43.582 17:30:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:43.582 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:43.582 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:43.582 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:43.582 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:43.582 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:43.582 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:43.582 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.582 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:43.582 17:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.582 17:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:43.582 "name": "Existed_Raid", 00:33:43.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:43.582 "strip_size_kb": 64, 00:33:43.582 "state": "configuring", 00:33:43.582 "raid_level": "raid0", 00:33:43.582 "superblock": false, 00:33:43.582 "num_base_bdevs": 3, 00:33:43.582 "num_base_bdevs_discovered": 1, 00:33:43.582 "num_base_bdevs_operational": 3, 00:33:43.582 "base_bdevs_list": [ 00:33:43.582 { 00:33:43.582 "name": "BaseBdev1", 00:33:43.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:43.582 "is_configured": false, 00:33:43.582 "data_offset": 0, 00:33:43.582 "data_size": 0 00:33:43.582 }, 00:33:43.582 { 00:33:43.582 "name": null, 00:33:43.582 "uuid": "c9705314-382e-4acd-adcb-9fbfc741f869", 00:33:43.582 "is_configured": false, 00:33:43.582 "data_offset": 0, 
00:33:43.582 "data_size": 65536 00:33:43.582 }, 00:33:43.582 { 00:33:43.582 "name": "BaseBdev3", 00:33:43.582 "uuid": "bf39025c-3463-411f-889a-a4eab0d14a25", 00:33:43.582 "is_configured": true, 00:33:43.582 "data_offset": 0, 00:33:43.582 "data_size": 65536 00:33:43.582 } 00:33:43.582 ] 00:33:43.582 }' 00:33:43.582 17:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:43.582 17:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:43.843 17:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:43.843 17:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:33:43.843 17:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.843 17:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:43.843 17:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.843 17:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:33:43.843 17:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:33:43.843 17:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.843 17:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:43.843 [2024-11-26 17:30:44.482670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:43.843 BaseBdev1 00:33:43.843 17:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.843 17:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:33:43.843 17:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:33:43.843 17:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:43.843 17:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:33:43.843 17:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:43.843 17:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:43.843 17:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:43.843 17:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.843 17:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:43.843 17:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.843 17:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:33:43.843 17:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.843 17:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:43.843 [ 00:33:43.843 { 00:33:43.843 "name": "BaseBdev1", 00:33:43.843 "aliases": [ 00:33:43.843 "28442189-09a9-4a32-89ba-3d61c9f7f54a" 00:33:43.843 ], 00:33:43.843 "product_name": "Malloc disk", 00:33:43.843 "block_size": 512, 00:33:43.843 "num_blocks": 65536, 00:33:43.843 "uuid": "28442189-09a9-4a32-89ba-3d61c9f7f54a", 00:33:43.843 "assigned_rate_limits": { 00:33:43.843 "rw_ios_per_sec": 0, 00:33:43.843 "rw_mbytes_per_sec": 0, 00:33:43.843 "r_mbytes_per_sec": 0, 00:33:43.843 "w_mbytes_per_sec": 0 00:33:43.843 }, 00:33:43.843 "claimed": true, 00:33:43.843 "claim_type": "exclusive_write", 00:33:43.843 "zoned": false, 00:33:43.843 "supported_io_types": { 00:33:43.843 "read": true, 00:33:43.843 "write": true, 00:33:43.843 "unmap": 
true, 00:33:43.843 "flush": true, 00:33:43.843 "reset": true, 00:33:43.843 "nvme_admin": false, 00:33:43.843 "nvme_io": false, 00:33:43.843 "nvme_io_md": false, 00:33:43.843 "write_zeroes": true, 00:33:43.843 "zcopy": true, 00:33:43.843 "get_zone_info": false, 00:33:43.843 "zone_management": false, 00:33:43.843 "zone_append": false, 00:33:43.843 "compare": false, 00:33:43.843 "compare_and_write": false, 00:33:43.843 "abort": true, 00:33:43.843 "seek_hole": false, 00:33:43.843 "seek_data": false, 00:33:43.843 "copy": true, 00:33:43.843 "nvme_iov_md": false 00:33:43.843 }, 00:33:43.843 "memory_domains": [ 00:33:43.843 { 00:33:43.843 "dma_device_id": "system", 00:33:43.843 "dma_device_type": 1 00:33:43.843 }, 00:33:43.843 { 00:33:43.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:43.843 "dma_device_type": 2 00:33:43.843 } 00:33:43.843 ], 00:33:43.843 "driver_specific": {} 00:33:43.843 } 00:33:43.843 ] 00:33:43.843 17:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.843 17:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:33:43.843 17:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:33:43.843 17:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:43.843 17:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:43.843 17:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:43.843 17:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:43.843 17:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:43.843 17:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:43.843 17:30:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:43.843 17:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:43.843 17:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:43.843 17:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:43.843 17:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.843 17:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:43.843 17:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:44.103 17:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.103 17:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:44.103 "name": "Existed_Raid", 00:33:44.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:44.103 "strip_size_kb": 64, 00:33:44.103 "state": "configuring", 00:33:44.103 "raid_level": "raid0", 00:33:44.103 "superblock": false, 00:33:44.103 "num_base_bdevs": 3, 00:33:44.103 "num_base_bdevs_discovered": 2, 00:33:44.103 "num_base_bdevs_operational": 3, 00:33:44.103 "base_bdevs_list": [ 00:33:44.103 { 00:33:44.103 "name": "BaseBdev1", 00:33:44.103 "uuid": "28442189-09a9-4a32-89ba-3d61c9f7f54a", 00:33:44.103 "is_configured": true, 00:33:44.103 "data_offset": 0, 00:33:44.103 "data_size": 65536 00:33:44.103 }, 00:33:44.103 { 00:33:44.103 "name": null, 00:33:44.103 "uuid": "c9705314-382e-4acd-adcb-9fbfc741f869", 00:33:44.103 "is_configured": false, 00:33:44.103 "data_offset": 0, 00:33:44.103 "data_size": 65536 00:33:44.103 }, 00:33:44.103 { 00:33:44.103 "name": "BaseBdev3", 00:33:44.103 "uuid": "bf39025c-3463-411f-889a-a4eab0d14a25", 00:33:44.103 "is_configured": true, 00:33:44.103 "data_offset": 0, 
00:33:44.103 "data_size": 65536 00:33:44.103 } 00:33:44.103 ] 00:33:44.103 }' 00:33:44.103 17:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:44.103 17:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:44.363 17:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:44.363 17:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.363 17:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:33:44.363 17:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:44.363 17:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.363 17:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:33:44.363 17:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:33:44.363 17:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.363 17:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:44.363 [2024-11-26 17:30:45.013826] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:33:44.363 17:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.363 17:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:33:44.363 17:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:44.363 17:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:44.363 17:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:33:44.363 17:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:44.363 17:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:44.363 17:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:44.363 17:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:44.363 17:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:44.363 17:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:44.363 17:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:44.363 17:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.363 17:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:44.363 17:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:44.363 17:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.622 17:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:44.622 "name": "Existed_Raid", 00:33:44.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:44.622 "strip_size_kb": 64, 00:33:44.622 "state": "configuring", 00:33:44.622 "raid_level": "raid0", 00:33:44.622 "superblock": false, 00:33:44.622 "num_base_bdevs": 3, 00:33:44.622 "num_base_bdevs_discovered": 1, 00:33:44.622 "num_base_bdevs_operational": 3, 00:33:44.622 "base_bdevs_list": [ 00:33:44.622 { 00:33:44.622 "name": "BaseBdev1", 00:33:44.622 "uuid": "28442189-09a9-4a32-89ba-3d61c9f7f54a", 00:33:44.622 "is_configured": true, 00:33:44.623 "data_offset": 0, 00:33:44.623 "data_size": 65536 00:33:44.623 }, 00:33:44.623 { 
00:33:44.623 "name": null, 00:33:44.623 "uuid": "c9705314-382e-4acd-adcb-9fbfc741f869", 00:33:44.623 "is_configured": false, 00:33:44.623 "data_offset": 0, 00:33:44.623 "data_size": 65536 00:33:44.623 }, 00:33:44.623 { 00:33:44.623 "name": null, 00:33:44.623 "uuid": "bf39025c-3463-411f-889a-a4eab0d14a25", 00:33:44.623 "is_configured": false, 00:33:44.623 "data_offset": 0, 00:33:44.623 "data_size": 65536 00:33:44.623 } 00:33:44.623 ] 00:33:44.623 }' 00:33:44.623 17:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:44.623 17:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:44.882 17:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:33:44.882 17:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:44.882 17:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.882 17:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:44.882 17:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.882 17:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:33:44.882 17:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:33:44.882 17:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.882 17:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:44.882 [2024-11-26 17:30:45.528955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:44.882 17:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.882 17:30:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:33:44.882 17:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:44.882 17:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:44.882 17:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:44.882 17:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:44.882 17:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:44.882 17:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:44.882 17:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:44.882 17:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:44.882 17:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:44.882 17:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:44.882 17:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:44.882 17:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.882 17:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:44.882 17:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.141 17:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:45.141 "name": "Existed_Raid", 00:33:45.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:45.141 "strip_size_kb": 64, 00:33:45.142 "state": "configuring", 00:33:45.142 "raid_level": "raid0", 00:33:45.142 
"superblock": false, 00:33:45.142 "num_base_bdevs": 3, 00:33:45.142 "num_base_bdevs_discovered": 2, 00:33:45.142 "num_base_bdevs_operational": 3, 00:33:45.142 "base_bdevs_list": [ 00:33:45.142 { 00:33:45.142 "name": "BaseBdev1", 00:33:45.142 "uuid": "28442189-09a9-4a32-89ba-3d61c9f7f54a", 00:33:45.142 "is_configured": true, 00:33:45.142 "data_offset": 0, 00:33:45.142 "data_size": 65536 00:33:45.142 }, 00:33:45.142 { 00:33:45.142 "name": null, 00:33:45.142 "uuid": "c9705314-382e-4acd-adcb-9fbfc741f869", 00:33:45.142 "is_configured": false, 00:33:45.142 "data_offset": 0, 00:33:45.142 "data_size": 65536 00:33:45.142 }, 00:33:45.142 { 00:33:45.142 "name": "BaseBdev3", 00:33:45.142 "uuid": "bf39025c-3463-411f-889a-a4eab0d14a25", 00:33:45.142 "is_configured": true, 00:33:45.142 "data_offset": 0, 00:33:45.142 "data_size": 65536 00:33:45.142 } 00:33:45.142 ] 00:33:45.142 }' 00:33:45.142 17:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:45.142 17:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:45.401 17:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:33:45.401 17:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:45.401 17:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.401 17:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:45.401 17:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.401 17:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:33:45.401 17:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:33:45.402 17:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:33:45.402 17:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:45.402 [2024-11-26 17:30:45.988250] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:45.662 17:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.662 17:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:33:45.662 17:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:45.662 17:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:45.662 17:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:45.662 17:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:45.662 17:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:45.662 17:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:45.662 17:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:45.662 17:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:45.662 17:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:45.662 17:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:45.662 17:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:45.662 17:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.662 17:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:45.662 17:30:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.662 17:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:45.662 "name": "Existed_Raid", 00:33:45.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:45.662 "strip_size_kb": 64, 00:33:45.662 "state": "configuring", 00:33:45.662 "raid_level": "raid0", 00:33:45.662 "superblock": false, 00:33:45.662 "num_base_bdevs": 3, 00:33:45.662 "num_base_bdevs_discovered": 1, 00:33:45.662 "num_base_bdevs_operational": 3, 00:33:45.662 "base_bdevs_list": [ 00:33:45.662 { 00:33:45.662 "name": null, 00:33:45.662 "uuid": "28442189-09a9-4a32-89ba-3d61c9f7f54a", 00:33:45.662 "is_configured": false, 00:33:45.662 "data_offset": 0, 00:33:45.662 "data_size": 65536 00:33:45.662 }, 00:33:45.662 { 00:33:45.662 "name": null, 00:33:45.662 "uuid": "c9705314-382e-4acd-adcb-9fbfc741f869", 00:33:45.662 "is_configured": false, 00:33:45.662 "data_offset": 0, 00:33:45.662 "data_size": 65536 00:33:45.662 }, 00:33:45.662 { 00:33:45.662 "name": "BaseBdev3", 00:33:45.662 "uuid": "bf39025c-3463-411f-889a-a4eab0d14a25", 00:33:45.662 "is_configured": true, 00:33:45.662 "data_offset": 0, 00:33:45.662 "data_size": 65536 00:33:45.662 } 00:33:45.662 ] 00:33:45.662 }' 00:33:45.662 17:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:45.662 17:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:45.922 17:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:33:45.922 17:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:45.922 17:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.922 17:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:45.922 17:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:33:45.922 17:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:33:45.922 17:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:33:45.922 17:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.922 17:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:45.922 [2024-11-26 17:30:46.587239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:45.922 17:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.922 17:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:33:45.922 17:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:45.922 17:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:45.922 17:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:45.922 17:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:45.922 17:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:45.922 17:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:45.922 17:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:45.922 17:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:45.922 17:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:45.922 17:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:33:45.922 17:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:45.922 17:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.922 17:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:46.181 17:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.181 17:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:46.181 "name": "Existed_Raid", 00:33:46.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:46.181 "strip_size_kb": 64, 00:33:46.181 "state": "configuring", 00:33:46.181 "raid_level": "raid0", 00:33:46.181 "superblock": false, 00:33:46.181 "num_base_bdevs": 3, 00:33:46.181 "num_base_bdevs_discovered": 2, 00:33:46.181 "num_base_bdevs_operational": 3, 00:33:46.181 "base_bdevs_list": [ 00:33:46.181 { 00:33:46.181 "name": null, 00:33:46.181 "uuid": "28442189-09a9-4a32-89ba-3d61c9f7f54a", 00:33:46.181 "is_configured": false, 00:33:46.181 "data_offset": 0, 00:33:46.181 "data_size": 65536 00:33:46.181 }, 00:33:46.181 { 00:33:46.181 "name": "BaseBdev2", 00:33:46.181 "uuid": "c9705314-382e-4acd-adcb-9fbfc741f869", 00:33:46.181 "is_configured": true, 00:33:46.181 "data_offset": 0, 00:33:46.181 "data_size": 65536 00:33:46.181 }, 00:33:46.181 { 00:33:46.181 "name": "BaseBdev3", 00:33:46.181 "uuid": "bf39025c-3463-411f-889a-a4eab0d14a25", 00:33:46.181 "is_configured": true, 00:33:46.181 "data_offset": 0, 00:33:46.181 "data_size": 65536 00:33:46.181 } 00:33:46.181 ] 00:33:46.181 }' 00:33:46.181 17:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:46.181 17:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:46.440 17:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:46.440 
17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.440 17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:46.440 17:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:33:46.440 17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.440 17:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:33:46.440 17:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:46.440 17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.440 17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:46.440 17:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:33:46.440 17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.440 17:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 28442189-09a9-4a32-89ba-3d61c9f7f54a 00:33:46.440 17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.440 17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:46.699 [2024-11-26 17:30:47.177633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:33:46.699 [2024-11-26 17:30:47.177679] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:33:46.699 [2024-11-26 17:30:47.177690] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:33:46.699 [2024-11-26 17:30:47.177979] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:33:46.699 [2024-11-26 17:30:47.178155] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:33:46.699 [2024-11-26 17:30:47.178166] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:33:46.699 [2024-11-26 17:30:47.178469] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:46.699 NewBaseBdev 00:33:46.699 17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.699 17:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:33:46.699 17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:33:46.699 17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:46.699 17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:33:46.699 17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:46.699 17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:46.699 17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:46.699 17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.699 17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:46.699 17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.699 17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:33:46.699 17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.699 17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:33:46.699 [ 00:33:46.699 { 00:33:46.699 "name": "NewBaseBdev", 00:33:46.699 "aliases": [ 00:33:46.699 "28442189-09a9-4a32-89ba-3d61c9f7f54a" 00:33:46.699 ], 00:33:46.699 "product_name": "Malloc disk", 00:33:46.699 "block_size": 512, 00:33:46.699 "num_blocks": 65536, 00:33:46.699 "uuid": "28442189-09a9-4a32-89ba-3d61c9f7f54a", 00:33:46.699 "assigned_rate_limits": { 00:33:46.699 "rw_ios_per_sec": 0, 00:33:46.699 "rw_mbytes_per_sec": 0, 00:33:46.699 "r_mbytes_per_sec": 0, 00:33:46.699 "w_mbytes_per_sec": 0 00:33:46.699 }, 00:33:46.699 "claimed": true, 00:33:46.699 "claim_type": "exclusive_write", 00:33:46.699 "zoned": false, 00:33:46.699 "supported_io_types": { 00:33:46.699 "read": true, 00:33:46.699 "write": true, 00:33:46.699 "unmap": true, 00:33:46.699 "flush": true, 00:33:46.699 "reset": true, 00:33:46.699 "nvme_admin": false, 00:33:46.699 "nvme_io": false, 00:33:46.699 "nvme_io_md": false, 00:33:46.699 "write_zeroes": true, 00:33:46.699 "zcopy": true, 00:33:46.699 "get_zone_info": false, 00:33:46.699 "zone_management": false, 00:33:46.699 "zone_append": false, 00:33:46.699 "compare": false, 00:33:46.699 "compare_and_write": false, 00:33:46.699 "abort": true, 00:33:46.699 "seek_hole": false, 00:33:46.699 "seek_data": false, 00:33:46.699 "copy": true, 00:33:46.699 "nvme_iov_md": false 00:33:46.699 }, 00:33:46.699 "memory_domains": [ 00:33:46.699 { 00:33:46.699 "dma_device_id": "system", 00:33:46.699 "dma_device_type": 1 00:33:46.699 }, 00:33:46.699 { 00:33:46.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:46.699 "dma_device_type": 2 00:33:46.699 } 00:33:46.699 ], 00:33:46.699 "driver_specific": {} 00:33:46.699 } 00:33:46.699 ] 00:33:46.699 17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.699 17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:33:46.699 17:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:33:46.699 17:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:46.699 17:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:46.700 17:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:46.700 17:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:46.700 17:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:46.700 17:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:46.700 17:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:46.700 17:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:46.700 17:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:46.700 17:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:46.700 17:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:46.700 17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.700 17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:46.700 17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.700 17:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:46.700 "name": "Existed_Raid", 00:33:46.700 "uuid": "c7251699-c5cb-4b51-bde6-22788b14136c", 00:33:46.700 "strip_size_kb": 64, 00:33:46.700 "state": "online", 00:33:46.700 "raid_level": "raid0", 00:33:46.700 "superblock": false, 00:33:46.700 "num_base_bdevs": 3, 00:33:46.700 
"num_base_bdevs_discovered": 3, 00:33:46.700 "num_base_bdevs_operational": 3, 00:33:46.700 "base_bdevs_list": [ 00:33:46.700 { 00:33:46.700 "name": "NewBaseBdev", 00:33:46.700 "uuid": "28442189-09a9-4a32-89ba-3d61c9f7f54a", 00:33:46.700 "is_configured": true, 00:33:46.700 "data_offset": 0, 00:33:46.700 "data_size": 65536 00:33:46.700 }, 00:33:46.700 { 00:33:46.700 "name": "BaseBdev2", 00:33:46.700 "uuid": "c9705314-382e-4acd-adcb-9fbfc741f869", 00:33:46.700 "is_configured": true, 00:33:46.700 "data_offset": 0, 00:33:46.700 "data_size": 65536 00:33:46.700 }, 00:33:46.700 { 00:33:46.700 "name": "BaseBdev3", 00:33:46.700 "uuid": "bf39025c-3463-411f-889a-a4eab0d14a25", 00:33:46.700 "is_configured": true, 00:33:46.700 "data_offset": 0, 00:33:46.700 "data_size": 65536 00:33:46.700 } 00:33:46.700 ] 00:33:46.700 }' 00:33:46.700 17:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:46.700 17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:46.958 17:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:33:46.958 17:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:33:46.958 17:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:33:46.958 17:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:33:46.958 17:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:33:46.958 17:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:33:46.958 17:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:33:46.958 17:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:33:46.958 17:30:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.958 17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:46.958 [2024-11-26 17:30:47.625281] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:46.958 17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.217 17:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:47.217 "name": "Existed_Raid", 00:33:47.217 "aliases": [ 00:33:47.217 "c7251699-c5cb-4b51-bde6-22788b14136c" 00:33:47.217 ], 00:33:47.217 "product_name": "Raid Volume", 00:33:47.217 "block_size": 512, 00:33:47.217 "num_blocks": 196608, 00:33:47.217 "uuid": "c7251699-c5cb-4b51-bde6-22788b14136c", 00:33:47.217 "assigned_rate_limits": { 00:33:47.217 "rw_ios_per_sec": 0, 00:33:47.217 "rw_mbytes_per_sec": 0, 00:33:47.217 "r_mbytes_per_sec": 0, 00:33:47.217 "w_mbytes_per_sec": 0 00:33:47.217 }, 00:33:47.217 "claimed": false, 00:33:47.217 "zoned": false, 00:33:47.217 "supported_io_types": { 00:33:47.217 "read": true, 00:33:47.217 "write": true, 00:33:47.217 "unmap": true, 00:33:47.217 "flush": true, 00:33:47.217 "reset": true, 00:33:47.217 "nvme_admin": false, 00:33:47.217 "nvme_io": false, 00:33:47.217 "nvme_io_md": false, 00:33:47.217 "write_zeroes": true, 00:33:47.217 "zcopy": false, 00:33:47.217 "get_zone_info": false, 00:33:47.217 "zone_management": false, 00:33:47.217 "zone_append": false, 00:33:47.217 "compare": false, 00:33:47.217 "compare_and_write": false, 00:33:47.217 "abort": false, 00:33:47.217 "seek_hole": false, 00:33:47.217 "seek_data": false, 00:33:47.217 "copy": false, 00:33:47.217 "nvme_iov_md": false 00:33:47.217 }, 00:33:47.217 "memory_domains": [ 00:33:47.217 { 00:33:47.217 "dma_device_id": "system", 00:33:47.217 "dma_device_type": 1 00:33:47.217 }, 00:33:47.217 { 00:33:47.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:47.217 "dma_device_type": 2 00:33:47.217 }, 
00:33:47.217 { 00:33:47.217 "dma_device_id": "system", 00:33:47.217 "dma_device_type": 1 00:33:47.217 }, 00:33:47.217 { 00:33:47.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:47.217 "dma_device_type": 2 00:33:47.217 }, 00:33:47.217 { 00:33:47.217 "dma_device_id": "system", 00:33:47.217 "dma_device_type": 1 00:33:47.217 }, 00:33:47.217 { 00:33:47.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:47.217 "dma_device_type": 2 00:33:47.217 } 00:33:47.217 ], 00:33:47.217 "driver_specific": { 00:33:47.217 "raid": { 00:33:47.217 "uuid": "c7251699-c5cb-4b51-bde6-22788b14136c", 00:33:47.217 "strip_size_kb": 64, 00:33:47.217 "state": "online", 00:33:47.217 "raid_level": "raid0", 00:33:47.217 "superblock": false, 00:33:47.217 "num_base_bdevs": 3, 00:33:47.217 "num_base_bdevs_discovered": 3, 00:33:47.217 "num_base_bdevs_operational": 3, 00:33:47.217 "base_bdevs_list": [ 00:33:47.217 { 00:33:47.217 "name": "NewBaseBdev", 00:33:47.217 "uuid": "28442189-09a9-4a32-89ba-3d61c9f7f54a", 00:33:47.217 "is_configured": true, 00:33:47.217 "data_offset": 0, 00:33:47.217 "data_size": 65536 00:33:47.217 }, 00:33:47.217 { 00:33:47.217 "name": "BaseBdev2", 00:33:47.217 "uuid": "c9705314-382e-4acd-adcb-9fbfc741f869", 00:33:47.217 "is_configured": true, 00:33:47.217 "data_offset": 0, 00:33:47.217 "data_size": 65536 00:33:47.217 }, 00:33:47.217 { 00:33:47.217 "name": "BaseBdev3", 00:33:47.217 "uuid": "bf39025c-3463-411f-889a-a4eab0d14a25", 00:33:47.217 "is_configured": true, 00:33:47.217 "data_offset": 0, 00:33:47.217 "data_size": 65536 00:33:47.217 } 00:33:47.217 ] 00:33:47.217 } 00:33:47.217 } 00:33:47.217 }' 00:33:47.217 17:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:47.217 17:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:33:47.217 BaseBdev2 00:33:47.217 BaseBdev3' 00:33:47.218 17:30:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:47.218 17:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:33:47.218 17:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:47.218 17:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:33:47.218 17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.218 17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:47.218 17:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:47.218 17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.218 17:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:47.218 17:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:47.218 17:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:47.218 17:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:33:47.218 17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.218 17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:47.218 17:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:47.218 17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.218 17:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:33:47.218 17:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:47.218 17:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:47.218 17:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:33:47.218 17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.218 17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:47.218 17:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:47.218 17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.218 17:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:47.218 17:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:47.218 17:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:33:47.218 17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.218 17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:47.218 [2024-11-26 17:30:47.896485] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:47.218 [2024-11-26 17:30:47.896520] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:47.218 [2024-11-26 17:30:47.896618] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:47.218 [2024-11-26 17:30:47.896680] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:47.218 [2024-11-26 17:30:47.896694] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:33:47.218 17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.218 17:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 64059 00:33:47.218 17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 64059 ']' 00:33:47.218 17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 64059 00:33:47.218 17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:33:47.218 17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:47.218 17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64059 00:33:47.476 killing process with pid 64059 00:33:47.477 17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:47.477 17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:47.477 17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64059' 00:33:47.477 17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 64059 00:33:47.477 17:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 64059 00:33:47.477 [2024-11-26 17:30:47.941838] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:47.735 [2024-11-26 17:30:48.277840] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:49.111 ************************************ 00:33:49.111 END TEST raid_state_function_test 00:33:49.111 ************************************ 00:33:49.111 17:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:33:49.111 00:33:49.111 real 0m10.769s 
00:33:49.111 user 0m17.009s 00:33:49.111 sys 0m1.850s 00:33:49.111 17:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:49.111 17:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:49.111 17:30:49 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:33:49.111 17:30:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:33:49.111 17:30:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:49.111 17:30:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:49.111 ************************************ 00:33:49.111 START TEST raid_state_function_test_sb 00:33:49.111 ************************************ 00:33:49.111 17:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:33:49.111 17:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:33:49.111 17:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:33:49.111 17:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:33:49.111 17:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:33:49.111 17:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:33:49.111 17:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:49.111 17:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:33:49.111 17:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:33:49.111 17:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:49.111 17:30:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:33:49.111 17:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:33:49.111 17:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:49.111 17:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:33:49.111 17:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:33:49.111 17:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:49.111 17:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:33:49.111 17:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:33:49.111 17:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:33:49.111 17:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:33:49.111 17:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:33:49.111 17:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:33:49.111 17:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:33:49.111 17:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:33:49.111 17:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:33:49.111 17:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:33:49.111 Process raid pid: 64680 00:33:49.111 17:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:33:49.111 17:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64680 
00:33:49.111 17:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:33:49.111 17:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64680' 00:33:49.111 17:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64680 00:33:49.111 17:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64680 ']' 00:33:49.111 17:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:49.111 17:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:49.111 17:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:49.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:49.111 17:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:49.111 17:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:49.111 [2024-11-26 17:30:49.635168] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:33:49.111 [2024-11-26 17:30:49.635418] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:49.375 [2024-11-26 17:30:49.810306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:49.375 [2024-11-26 17:30:49.935997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:49.638 [2024-11-26 17:30:50.158005] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:49.638 [2024-11-26 17:30:50.158054] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:49.898 17:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:49.898 17:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:33:49.898 17:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:33:49.898 17:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.898 17:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:49.898 [2024-11-26 17:30:50.521602] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:49.898 [2024-11-26 17:30:50.521665] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:49.899 [2024-11-26 17:30:50.521682] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:49.899 [2024-11-26 17:30:50.521694] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:49.899 [2024-11-26 17:30:50.521702] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:33:49.899 [2024-11-26 17:30:50.521712] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:33:49.899 17:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.899 17:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:33:49.899 17:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:49.899 17:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:49.899 17:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:49.899 17:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:49.899 17:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:49.899 17:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:49.899 17:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:49.899 17:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:49.899 17:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:49.899 17:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:49.899 17:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.899 17:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:49.899 17:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:49.899 17:30:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.899 17:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:49.899 "name": "Existed_Raid", 00:33:49.899 "uuid": "a5f79ea1-bac4-4842-b5bb-bcc0c4894886", 00:33:49.899 "strip_size_kb": 64, 00:33:49.899 "state": "configuring", 00:33:49.899 "raid_level": "raid0", 00:33:49.899 "superblock": true, 00:33:49.899 "num_base_bdevs": 3, 00:33:49.899 "num_base_bdevs_discovered": 0, 00:33:49.899 "num_base_bdevs_operational": 3, 00:33:49.899 "base_bdevs_list": [ 00:33:49.899 { 00:33:49.899 "name": "BaseBdev1", 00:33:49.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:49.899 "is_configured": false, 00:33:49.899 "data_offset": 0, 00:33:49.899 "data_size": 0 00:33:49.899 }, 00:33:49.899 { 00:33:49.899 "name": "BaseBdev2", 00:33:49.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:49.899 "is_configured": false, 00:33:49.899 "data_offset": 0, 00:33:49.899 "data_size": 0 00:33:49.899 }, 00:33:49.899 { 00:33:49.899 "name": "BaseBdev3", 00:33:49.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:49.899 "is_configured": false, 00:33:49.899 "data_offset": 0, 00:33:49.899 "data_size": 0 00:33:49.899 } 00:33:49.899 ] 00:33:49.899 }' 00:33:49.899 17:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:49.899 17:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:50.470 17:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:33:50.470 17:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.470 17:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:50.470 [2024-11-26 17:30:50.992717] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:50.470 [2024-11-26 17:30:50.992757] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:33:50.470 17:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.470 17:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:33:50.470 17:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.470 17:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:50.470 [2024-11-26 17:30:51.000721] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:50.470 [2024-11-26 17:30:51.000773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:50.470 [2024-11-26 17:30:51.000784] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:50.470 [2024-11-26 17:30:51.000794] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:50.470 [2024-11-26 17:30:51.000801] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:33:50.470 [2024-11-26 17:30:51.000811] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:33:50.470 17:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.470 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:33:50.470 17:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.470 17:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:50.470 [2024-11-26 17:30:51.046377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:50.470 BaseBdev1 
00:33:50.470 17:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.470 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:33:50.470 17:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:33:50.470 17:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:50.470 17:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:33:50.470 17:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:50.470 17:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:50.470 17:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:50.470 17:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.470 17:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:50.470 17:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.470 17:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:33:50.470 17:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.470 17:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:50.470 [ 00:33:50.470 { 00:33:50.470 "name": "BaseBdev1", 00:33:50.470 "aliases": [ 00:33:50.470 "4756dcea-d59d-4a7b-874e-ce3f09a4342f" 00:33:50.470 ], 00:33:50.470 "product_name": "Malloc disk", 00:33:50.470 "block_size": 512, 00:33:50.470 "num_blocks": 65536, 00:33:50.470 "uuid": "4756dcea-d59d-4a7b-874e-ce3f09a4342f", 00:33:50.470 "assigned_rate_limits": { 00:33:50.470 
"rw_ios_per_sec": 0, 00:33:50.470 "rw_mbytes_per_sec": 0, 00:33:50.470 "r_mbytes_per_sec": 0, 00:33:50.470 "w_mbytes_per_sec": 0 00:33:50.470 }, 00:33:50.470 "claimed": true, 00:33:50.470 "claim_type": "exclusive_write", 00:33:50.470 "zoned": false, 00:33:50.470 "supported_io_types": { 00:33:50.470 "read": true, 00:33:50.470 "write": true, 00:33:50.470 "unmap": true, 00:33:50.470 "flush": true, 00:33:50.470 "reset": true, 00:33:50.470 "nvme_admin": false, 00:33:50.470 "nvme_io": false, 00:33:50.470 "nvme_io_md": false, 00:33:50.470 "write_zeroes": true, 00:33:50.470 "zcopy": true, 00:33:50.470 "get_zone_info": false, 00:33:50.470 "zone_management": false, 00:33:50.470 "zone_append": false, 00:33:50.470 "compare": false, 00:33:50.470 "compare_and_write": false, 00:33:50.470 "abort": true, 00:33:50.470 "seek_hole": false, 00:33:50.470 "seek_data": false, 00:33:50.470 "copy": true, 00:33:50.470 "nvme_iov_md": false 00:33:50.470 }, 00:33:50.470 "memory_domains": [ 00:33:50.470 { 00:33:50.470 "dma_device_id": "system", 00:33:50.470 "dma_device_type": 1 00:33:50.470 }, 00:33:50.470 { 00:33:50.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:50.470 "dma_device_type": 2 00:33:50.470 } 00:33:50.470 ], 00:33:50.470 "driver_specific": {} 00:33:50.470 } 00:33:50.471 ] 00:33:50.471 17:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.471 17:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:33:50.471 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:33:50.471 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:50.471 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:50.471 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:33:50.471 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:50.471 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:50.471 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:50.471 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:50.471 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:50.471 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:50.471 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:50.471 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:50.471 17:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.471 17:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:50.471 17:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.471 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:50.471 "name": "Existed_Raid", 00:33:50.471 "uuid": "3a89df9b-a3a7-4f03-8ccd-49f240791078", 00:33:50.471 "strip_size_kb": 64, 00:33:50.471 "state": "configuring", 00:33:50.471 "raid_level": "raid0", 00:33:50.471 "superblock": true, 00:33:50.471 "num_base_bdevs": 3, 00:33:50.471 "num_base_bdevs_discovered": 1, 00:33:50.471 "num_base_bdevs_operational": 3, 00:33:50.471 "base_bdevs_list": [ 00:33:50.471 { 00:33:50.471 "name": "BaseBdev1", 00:33:50.471 "uuid": "4756dcea-d59d-4a7b-874e-ce3f09a4342f", 00:33:50.471 "is_configured": true, 00:33:50.471 "data_offset": 2048, 00:33:50.471 "data_size": 63488 
00:33:50.471 }, 00:33:50.471 { 00:33:50.471 "name": "BaseBdev2", 00:33:50.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:50.471 "is_configured": false, 00:33:50.471 "data_offset": 0, 00:33:50.471 "data_size": 0 00:33:50.471 }, 00:33:50.471 { 00:33:50.471 "name": "BaseBdev3", 00:33:50.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:50.471 "is_configured": false, 00:33:50.471 "data_offset": 0, 00:33:50.471 "data_size": 0 00:33:50.471 } 00:33:50.471 ] 00:33:50.471 }' 00:33:50.471 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:50.471 17:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:51.039 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:33:51.039 17:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.039 17:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:51.039 [2024-11-26 17:30:51.509691] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:51.039 [2024-11-26 17:30:51.509757] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:33:51.039 17:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.039 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:33:51.039 17:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.039 17:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:51.039 [2024-11-26 17:30:51.521750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:51.039 [2024-11-26 
17:30:51.523979] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:51.039 [2024-11-26 17:30:51.524067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:51.039 [2024-11-26 17:30:51.524110] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:33:51.039 [2024-11-26 17:30:51.524151] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:33:51.039 17:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.039 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:33:51.039 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:33:51.039 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:33:51.039 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:51.039 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:51.040 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:51.040 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:51.040 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:51.040 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:51.040 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:51.040 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:51.040 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:33:51.040 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:51.040 17:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.040 17:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:51.040 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:51.040 17:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.040 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:51.040 "name": "Existed_Raid", 00:33:51.040 "uuid": "570c6812-8f9d-4bb9-8b08-dcdf0c17c0f0", 00:33:51.040 "strip_size_kb": 64, 00:33:51.040 "state": "configuring", 00:33:51.040 "raid_level": "raid0", 00:33:51.040 "superblock": true, 00:33:51.040 "num_base_bdevs": 3, 00:33:51.040 "num_base_bdevs_discovered": 1, 00:33:51.040 "num_base_bdevs_operational": 3, 00:33:51.040 "base_bdevs_list": [ 00:33:51.040 { 00:33:51.040 "name": "BaseBdev1", 00:33:51.040 "uuid": "4756dcea-d59d-4a7b-874e-ce3f09a4342f", 00:33:51.040 "is_configured": true, 00:33:51.040 "data_offset": 2048, 00:33:51.040 "data_size": 63488 00:33:51.040 }, 00:33:51.040 { 00:33:51.040 "name": "BaseBdev2", 00:33:51.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:51.040 "is_configured": false, 00:33:51.040 "data_offset": 0, 00:33:51.040 "data_size": 0 00:33:51.040 }, 00:33:51.040 { 00:33:51.040 "name": "BaseBdev3", 00:33:51.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:51.040 "is_configured": false, 00:33:51.040 "data_offset": 0, 00:33:51.040 "data_size": 0 00:33:51.040 } 00:33:51.040 ] 00:33:51.040 }' 00:33:51.040 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:51.040 17:30:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:33:51.299 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:33:51.299 17:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.299 17:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:51.299 [2024-11-26 17:30:51.989949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:51.299 BaseBdev2 00:33:51.299 17:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.299 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:33:51.559 17:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:33:51.559 17:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:51.559 17:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:33:51.559 17:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:51.559 17:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:51.559 17:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:51.559 17:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.559 17:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:51.559 17:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.559 17:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:33:51.560 17:30:52 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.560 17:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:51.560 [ 00:33:51.560 { 00:33:51.560 "name": "BaseBdev2", 00:33:51.560 "aliases": [ 00:33:51.560 "fe3f41a6-9cc1-4cc5-b678-4b3122f92d24" 00:33:51.560 ], 00:33:51.560 "product_name": "Malloc disk", 00:33:51.560 "block_size": 512, 00:33:51.560 "num_blocks": 65536, 00:33:51.560 "uuid": "fe3f41a6-9cc1-4cc5-b678-4b3122f92d24", 00:33:51.560 "assigned_rate_limits": { 00:33:51.560 "rw_ios_per_sec": 0, 00:33:51.560 "rw_mbytes_per_sec": 0, 00:33:51.560 "r_mbytes_per_sec": 0, 00:33:51.560 "w_mbytes_per_sec": 0 00:33:51.560 }, 00:33:51.560 "claimed": true, 00:33:51.560 "claim_type": "exclusive_write", 00:33:51.560 "zoned": false, 00:33:51.560 "supported_io_types": { 00:33:51.560 "read": true, 00:33:51.560 "write": true, 00:33:51.560 "unmap": true, 00:33:51.560 "flush": true, 00:33:51.560 "reset": true, 00:33:51.560 "nvme_admin": false, 00:33:51.560 "nvme_io": false, 00:33:51.560 "nvme_io_md": false, 00:33:51.560 "write_zeroes": true, 00:33:51.560 "zcopy": true, 00:33:51.560 "get_zone_info": false, 00:33:51.560 "zone_management": false, 00:33:51.560 "zone_append": false, 00:33:51.560 "compare": false, 00:33:51.560 "compare_and_write": false, 00:33:51.560 "abort": true, 00:33:51.560 "seek_hole": false, 00:33:51.560 "seek_data": false, 00:33:51.560 "copy": true, 00:33:51.560 "nvme_iov_md": false 00:33:51.560 }, 00:33:51.560 "memory_domains": [ 00:33:51.560 { 00:33:51.560 "dma_device_id": "system", 00:33:51.560 "dma_device_type": 1 00:33:51.560 }, 00:33:51.560 { 00:33:51.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:51.560 "dma_device_type": 2 00:33:51.560 } 00:33:51.560 ], 00:33:51.560 "driver_specific": {} 00:33:51.560 } 00:33:51.560 ] 00:33:51.560 17:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.560 17:30:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:33:51.560 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:33:51.560 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:33:51.560 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:33:51.560 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:51.560 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:51.560 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:51.560 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:51.560 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:51.560 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:51.560 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:51.560 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:51.560 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:51.560 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:51.560 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:51.560 17:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.560 17:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:51.560 17:30:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.560 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:51.560 "name": "Existed_Raid", 00:33:51.560 "uuid": "570c6812-8f9d-4bb9-8b08-dcdf0c17c0f0", 00:33:51.560 "strip_size_kb": 64, 00:33:51.560 "state": "configuring", 00:33:51.560 "raid_level": "raid0", 00:33:51.560 "superblock": true, 00:33:51.560 "num_base_bdevs": 3, 00:33:51.560 "num_base_bdevs_discovered": 2, 00:33:51.560 "num_base_bdevs_operational": 3, 00:33:51.560 "base_bdevs_list": [ 00:33:51.560 { 00:33:51.560 "name": "BaseBdev1", 00:33:51.560 "uuid": "4756dcea-d59d-4a7b-874e-ce3f09a4342f", 00:33:51.560 "is_configured": true, 00:33:51.560 "data_offset": 2048, 00:33:51.560 "data_size": 63488 00:33:51.560 }, 00:33:51.560 { 00:33:51.560 "name": "BaseBdev2", 00:33:51.560 "uuid": "fe3f41a6-9cc1-4cc5-b678-4b3122f92d24", 00:33:51.560 "is_configured": true, 00:33:51.560 "data_offset": 2048, 00:33:51.560 "data_size": 63488 00:33:51.560 }, 00:33:51.560 { 00:33:51.560 "name": "BaseBdev3", 00:33:51.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:51.560 "is_configured": false, 00:33:51.560 "data_offset": 0, 00:33:51.560 "data_size": 0 00:33:51.560 } 00:33:51.560 ] 00:33:51.560 }' 00:33:51.560 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:51.560 17:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:51.820 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:33:51.820 17:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.821 17:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:51.821 [2024-11-26 17:30:52.458271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:51.821 [2024-11-26 17:30:52.458746] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:33:51.821 [2024-11-26 17:30:52.458774] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:33:51.821 BaseBdev3 00:33:51.821 [2024-11-26 17:30:52.459105] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:33:51.821 [2024-11-26 17:30:52.459281] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:33:51.821 [2024-11-26 17:30:52.459294] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:33:51.821 [2024-11-26 17:30:52.459460] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:51.821 17:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.821 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:33:51.821 17:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:33:51.821 17:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:51.821 17:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:33:51.821 17:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:51.821 17:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:51.821 17:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:51.821 17:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.821 17:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:51.821 17:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:33:51.821 17:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:33:51.821 17:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.821 17:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:51.821 [ 00:33:51.821 { 00:33:51.821 "name": "BaseBdev3", 00:33:51.821 "aliases": [ 00:33:51.821 "e5053f34-d49e-4034-b723-dee21f00668d" 00:33:51.821 ], 00:33:51.821 "product_name": "Malloc disk", 00:33:51.821 "block_size": 512, 00:33:51.821 "num_blocks": 65536, 00:33:51.821 "uuid": "e5053f34-d49e-4034-b723-dee21f00668d", 00:33:51.821 "assigned_rate_limits": { 00:33:51.821 "rw_ios_per_sec": 0, 00:33:51.821 "rw_mbytes_per_sec": 0, 00:33:51.821 "r_mbytes_per_sec": 0, 00:33:51.821 "w_mbytes_per_sec": 0 00:33:51.821 }, 00:33:51.821 "claimed": true, 00:33:51.821 "claim_type": "exclusive_write", 00:33:51.821 "zoned": false, 00:33:51.821 "supported_io_types": { 00:33:51.821 "read": true, 00:33:51.821 "write": true, 00:33:51.821 "unmap": true, 00:33:51.821 "flush": true, 00:33:51.821 "reset": true, 00:33:51.821 "nvme_admin": false, 00:33:51.821 "nvme_io": false, 00:33:51.821 "nvme_io_md": false, 00:33:51.821 "write_zeroes": true, 00:33:51.821 "zcopy": true, 00:33:51.821 "get_zone_info": false, 00:33:51.821 "zone_management": false, 00:33:51.821 "zone_append": false, 00:33:51.821 "compare": false, 00:33:51.821 "compare_and_write": false, 00:33:51.821 "abort": true, 00:33:51.821 "seek_hole": false, 00:33:51.821 "seek_data": false, 00:33:51.821 "copy": true, 00:33:51.821 "nvme_iov_md": false 00:33:51.821 }, 00:33:51.821 "memory_domains": [ 00:33:51.821 { 00:33:51.821 "dma_device_id": "system", 00:33:51.821 "dma_device_type": 1 00:33:51.821 }, 00:33:51.821 { 00:33:51.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:51.821 "dma_device_type": 2 00:33:51.821 } 00:33:51.821 ], 00:33:51.821 "driver_specific": 
{} 00:33:51.821 } 00:33:51.821 ] 00:33:51.821 17:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.821 17:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:33:51.821 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:33:51.821 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:33:51.821 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:33:51.821 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:51.821 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:51.821 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:51.821 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:51.821 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:51.821 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:51.821 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:51.821 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:51.821 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:51.821 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:51.821 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:51.821 17:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:33:51.821 17:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:52.079 17:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.079 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:52.079 "name": "Existed_Raid", 00:33:52.079 "uuid": "570c6812-8f9d-4bb9-8b08-dcdf0c17c0f0", 00:33:52.079 "strip_size_kb": 64, 00:33:52.079 "state": "online", 00:33:52.079 "raid_level": "raid0", 00:33:52.079 "superblock": true, 00:33:52.079 "num_base_bdevs": 3, 00:33:52.079 "num_base_bdevs_discovered": 3, 00:33:52.079 "num_base_bdevs_operational": 3, 00:33:52.079 "base_bdevs_list": [ 00:33:52.079 { 00:33:52.079 "name": "BaseBdev1", 00:33:52.079 "uuid": "4756dcea-d59d-4a7b-874e-ce3f09a4342f", 00:33:52.079 "is_configured": true, 00:33:52.079 "data_offset": 2048, 00:33:52.079 "data_size": 63488 00:33:52.079 }, 00:33:52.079 { 00:33:52.079 "name": "BaseBdev2", 00:33:52.079 "uuid": "fe3f41a6-9cc1-4cc5-b678-4b3122f92d24", 00:33:52.079 "is_configured": true, 00:33:52.079 "data_offset": 2048, 00:33:52.079 "data_size": 63488 00:33:52.079 }, 00:33:52.079 { 00:33:52.079 "name": "BaseBdev3", 00:33:52.080 "uuid": "e5053f34-d49e-4034-b723-dee21f00668d", 00:33:52.080 "is_configured": true, 00:33:52.080 "data_offset": 2048, 00:33:52.080 "data_size": 63488 00:33:52.080 } 00:33:52.080 ] 00:33:52.080 }' 00:33:52.080 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:52.080 17:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:52.337 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:33:52.337 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:33:52.337 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:33:52.337 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:33:52.337 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:33:52.337 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:33:52.337 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:33:52.337 17:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.337 17:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:52.337 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:33:52.337 [2024-11-26 17:30:52.957883] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:52.337 17:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.337 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:52.337 "name": "Existed_Raid", 00:33:52.337 "aliases": [ 00:33:52.337 "570c6812-8f9d-4bb9-8b08-dcdf0c17c0f0" 00:33:52.337 ], 00:33:52.337 "product_name": "Raid Volume", 00:33:52.337 "block_size": 512, 00:33:52.337 "num_blocks": 190464, 00:33:52.338 "uuid": "570c6812-8f9d-4bb9-8b08-dcdf0c17c0f0", 00:33:52.338 "assigned_rate_limits": { 00:33:52.338 "rw_ios_per_sec": 0, 00:33:52.338 "rw_mbytes_per_sec": 0, 00:33:52.338 "r_mbytes_per_sec": 0, 00:33:52.338 "w_mbytes_per_sec": 0 00:33:52.338 }, 00:33:52.338 "claimed": false, 00:33:52.338 "zoned": false, 00:33:52.338 "supported_io_types": { 00:33:52.338 "read": true, 00:33:52.338 "write": true, 00:33:52.338 "unmap": true, 00:33:52.338 "flush": true, 00:33:52.338 "reset": true, 00:33:52.338 "nvme_admin": false, 00:33:52.338 "nvme_io": false, 00:33:52.338 "nvme_io_md": false, 00:33:52.338 
"write_zeroes": true, 00:33:52.338 "zcopy": false, 00:33:52.338 "get_zone_info": false, 00:33:52.338 "zone_management": false, 00:33:52.338 "zone_append": false, 00:33:52.338 "compare": false, 00:33:52.338 "compare_and_write": false, 00:33:52.338 "abort": false, 00:33:52.338 "seek_hole": false, 00:33:52.338 "seek_data": false, 00:33:52.338 "copy": false, 00:33:52.338 "nvme_iov_md": false 00:33:52.338 }, 00:33:52.338 "memory_domains": [ 00:33:52.338 { 00:33:52.338 "dma_device_id": "system", 00:33:52.338 "dma_device_type": 1 00:33:52.338 }, 00:33:52.338 { 00:33:52.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:52.338 "dma_device_type": 2 00:33:52.338 }, 00:33:52.338 { 00:33:52.338 "dma_device_id": "system", 00:33:52.338 "dma_device_type": 1 00:33:52.338 }, 00:33:52.338 { 00:33:52.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:52.338 "dma_device_type": 2 00:33:52.338 }, 00:33:52.338 { 00:33:52.338 "dma_device_id": "system", 00:33:52.338 "dma_device_type": 1 00:33:52.338 }, 00:33:52.338 { 00:33:52.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:52.338 "dma_device_type": 2 00:33:52.338 } 00:33:52.338 ], 00:33:52.338 "driver_specific": { 00:33:52.338 "raid": { 00:33:52.338 "uuid": "570c6812-8f9d-4bb9-8b08-dcdf0c17c0f0", 00:33:52.338 "strip_size_kb": 64, 00:33:52.338 "state": "online", 00:33:52.338 "raid_level": "raid0", 00:33:52.338 "superblock": true, 00:33:52.338 "num_base_bdevs": 3, 00:33:52.338 "num_base_bdevs_discovered": 3, 00:33:52.338 "num_base_bdevs_operational": 3, 00:33:52.338 "base_bdevs_list": [ 00:33:52.338 { 00:33:52.338 "name": "BaseBdev1", 00:33:52.338 "uuid": "4756dcea-d59d-4a7b-874e-ce3f09a4342f", 00:33:52.338 "is_configured": true, 00:33:52.338 "data_offset": 2048, 00:33:52.338 "data_size": 63488 00:33:52.338 }, 00:33:52.338 { 00:33:52.338 "name": "BaseBdev2", 00:33:52.338 "uuid": "fe3f41a6-9cc1-4cc5-b678-4b3122f92d24", 00:33:52.338 "is_configured": true, 00:33:52.338 "data_offset": 2048, 00:33:52.338 "data_size": 63488 00:33:52.338 }, 
00:33:52.338 { 00:33:52.338 "name": "BaseBdev3", 00:33:52.338 "uuid": "e5053f34-d49e-4034-b723-dee21f00668d", 00:33:52.338 "is_configured": true, 00:33:52.338 "data_offset": 2048, 00:33:52.338 "data_size": 63488 00:33:52.338 } 00:33:52.338 ] 00:33:52.338 } 00:33:52.338 } 00:33:52.338 }' 00:33:52.338 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:52.595 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:33:52.595 BaseBdev2 00:33:52.595 BaseBdev3' 00:33:52.595 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:52.595 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:33:52.595 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:52.595 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:33:52.595 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:52.595 17:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.595 17:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:52.595 17:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.595 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:52.595 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:52.595 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:52.596 
17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:52.596 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:33:52.596 17:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.596 17:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:52.596 17:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.596 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:52.596 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:52.596 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:52.596 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:33:52.596 17:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.596 17:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:52.596 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:52.596 17:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.596 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:52.596 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:52.596 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:33:52.596 17:30:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.596 17:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:52.596 [2024-11-26 17:30:53.217108] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:52.596 [2024-11-26 17:30:53.217139] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:52.596 [2024-11-26 17:30:53.217194] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:52.854 17:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.854 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:33:52.854 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:33:52.854 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:33:52.854 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:33:52.854 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:33:52.854 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:33:52.854 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:52.854 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:33:52.854 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:52.854 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:52.854 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:52.854 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:33:52.854 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:52.854 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:52.854 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:52.854 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:52.854 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:52.854 17:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.854 17:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:52.854 17:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.854 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:52.854 "name": "Existed_Raid", 00:33:52.854 "uuid": "570c6812-8f9d-4bb9-8b08-dcdf0c17c0f0", 00:33:52.854 "strip_size_kb": 64, 00:33:52.854 "state": "offline", 00:33:52.854 "raid_level": "raid0", 00:33:52.854 "superblock": true, 00:33:52.854 "num_base_bdevs": 3, 00:33:52.854 "num_base_bdevs_discovered": 2, 00:33:52.854 "num_base_bdevs_operational": 2, 00:33:52.854 "base_bdevs_list": [ 00:33:52.854 { 00:33:52.854 "name": null, 00:33:52.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:52.854 "is_configured": false, 00:33:52.854 "data_offset": 0, 00:33:52.854 "data_size": 63488 00:33:52.854 }, 00:33:52.854 { 00:33:52.854 "name": "BaseBdev2", 00:33:52.854 "uuid": "fe3f41a6-9cc1-4cc5-b678-4b3122f92d24", 00:33:52.854 "is_configured": true, 00:33:52.854 "data_offset": 2048, 00:33:52.854 "data_size": 63488 00:33:52.854 }, 00:33:52.854 { 00:33:52.854 "name": "BaseBdev3", 00:33:52.854 "uuid": "e5053f34-d49e-4034-b723-dee21f00668d", 
00:33:52.854 "is_configured": true, 00:33:52.854 "data_offset": 2048, 00:33:52.854 "data_size": 63488 00:33:52.854 } 00:33:52.854 ] 00:33:52.854 }' 00:33:52.854 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:52.854 17:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:53.112 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:33:53.112 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:33:53.112 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:53.112 17:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.112 17:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:53.112 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:33:53.112 17:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.112 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:33:53.112 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:53.112 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:33:53.112 17:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.112 17:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:53.112 [2024-11-26 17:30:53.745195] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:53.370 17:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.370 17:30:53 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:33:53.370 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:33:53.370 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:53.370 17:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.370 17:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:53.370 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:33:53.370 17:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.370 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:33:53.370 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:53.370 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:33:53.370 17:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.370 17:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:53.370 [2024-11-26 17:30:53.883599] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:33:53.370 [2024-11-26 17:30:53.883700] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:33:53.370 17:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.370 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:33:53.370 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:33:53.370 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:33:53.370 17:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.370 17:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:53.370 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:33:53.370 17:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.370 17:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:33:53.370 17:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:33:53.370 17:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:33:53.370 17:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:33:53.370 17:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:33:53.370 17:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:33:53.370 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.370 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:53.639 BaseBdev2 00:33:53.639 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.639 17:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:33:53.639 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:33:53.639 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:53.639 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:33:53.639 17:30:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:53.639 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:53.639 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:53.639 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.639 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:53.639 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.639 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:33:53.639 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.639 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:53.639 [ 00:33:53.639 { 00:33:53.639 "name": "BaseBdev2", 00:33:53.639 "aliases": [ 00:33:53.639 "ad514138-573a-4944-8ce1-50911c78b859" 00:33:53.639 ], 00:33:53.639 "product_name": "Malloc disk", 00:33:53.639 "block_size": 512, 00:33:53.640 "num_blocks": 65536, 00:33:53.640 "uuid": "ad514138-573a-4944-8ce1-50911c78b859", 00:33:53.640 "assigned_rate_limits": { 00:33:53.640 "rw_ios_per_sec": 0, 00:33:53.640 "rw_mbytes_per_sec": 0, 00:33:53.640 "r_mbytes_per_sec": 0, 00:33:53.640 "w_mbytes_per_sec": 0 00:33:53.640 }, 00:33:53.640 "claimed": false, 00:33:53.640 "zoned": false, 00:33:53.640 "supported_io_types": { 00:33:53.640 "read": true, 00:33:53.640 "write": true, 00:33:53.640 "unmap": true, 00:33:53.640 "flush": true, 00:33:53.640 "reset": true, 00:33:53.640 "nvme_admin": false, 00:33:53.640 "nvme_io": false, 00:33:53.640 "nvme_io_md": false, 00:33:53.640 "write_zeroes": true, 00:33:53.640 "zcopy": true, 00:33:53.640 "get_zone_info": false, 00:33:53.640 
"zone_management": false, 00:33:53.640 "zone_append": false, 00:33:53.640 "compare": false, 00:33:53.640 "compare_and_write": false, 00:33:53.640 "abort": true, 00:33:53.640 "seek_hole": false, 00:33:53.640 "seek_data": false, 00:33:53.640 "copy": true, 00:33:53.640 "nvme_iov_md": false 00:33:53.640 }, 00:33:53.640 "memory_domains": [ 00:33:53.640 { 00:33:53.640 "dma_device_id": "system", 00:33:53.640 "dma_device_type": 1 00:33:53.640 }, 00:33:53.640 { 00:33:53.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:53.640 "dma_device_type": 2 00:33:53.640 } 00:33:53.640 ], 00:33:53.640 "driver_specific": {} 00:33:53.640 } 00:33:53.640 ] 00:33:53.640 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.640 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:33:53.640 17:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:33:53.640 17:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:33:53.640 17:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:33:53.640 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.640 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:53.640 BaseBdev3 00:33:53.640 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.640 17:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:33:53.640 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:33:53.640 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:53.640 17:30:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:33:53.640 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:53.640 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:53.640 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:53.640 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.640 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:53.640 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.640 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:33:53.640 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.640 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:53.640 [ 00:33:53.640 { 00:33:53.640 "name": "BaseBdev3", 00:33:53.640 "aliases": [ 00:33:53.640 "8082b6d6-7fc3-4970-9666-406d98783633" 00:33:53.640 ], 00:33:53.640 "product_name": "Malloc disk", 00:33:53.640 "block_size": 512, 00:33:53.640 "num_blocks": 65536, 00:33:53.640 "uuid": "8082b6d6-7fc3-4970-9666-406d98783633", 00:33:53.640 "assigned_rate_limits": { 00:33:53.640 "rw_ios_per_sec": 0, 00:33:53.640 "rw_mbytes_per_sec": 0, 00:33:53.640 "r_mbytes_per_sec": 0, 00:33:53.640 "w_mbytes_per_sec": 0 00:33:53.640 }, 00:33:53.640 "claimed": false, 00:33:53.640 "zoned": false, 00:33:53.640 "supported_io_types": { 00:33:53.640 "read": true, 00:33:53.640 "write": true, 00:33:53.640 "unmap": true, 00:33:53.640 "flush": true, 00:33:53.640 "reset": true, 00:33:53.640 "nvme_admin": false, 00:33:53.640 "nvme_io": false, 00:33:53.640 "nvme_io_md": false, 00:33:53.640 "write_zeroes": true, 00:33:53.640 
"zcopy": true, 00:33:53.640 "get_zone_info": false, 00:33:53.640 "zone_management": false, 00:33:53.640 "zone_append": false, 00:33:53.640 "compare": false, 00:33:53.640 "compare_and_write": false, 00:33:53.640 "abort": true, 00:33:53.640 "seek_hole": false, 00:33:53.640 "seek_data": false, 00:33:53.640 "copy": true, 00:33:53.640 "nvme_iov_md": false 00:33:53.640 }, 00:33:53.640 "memory_domains": [ 00:33:53.640 { 00:33:53.640 "dma_device_id": "system", 00:33:53.640 "dma_device_type": 1 00:33:53.640 }, 00:33:53.640 { 00:33:53.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:53.640 "dma_device_type": 2 00:33:53.640 } 00:33:53.640 ], 00:33:53.640 "driver_specific": {} 00:33:53.640 } 00:33:53.640 ] 00:33:53.640 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.640 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:33:53.640 17:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:33:53.640 17:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:33:53.640 17:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:33:53.640 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.640 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:53.640 [2024-11-26 17:30:54.195916] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:53.640 [2024-11-26 17:30:54.195971] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:53.640 [2024-11-26 17:30:54.195999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:53.640 [2024-11-26 17:30:54.198044] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:53.640 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.640 17:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:33:53.640 17:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:53.640 17:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:53.640 17:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:53.640 17:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:53.640 17:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:53.640 17:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:53.640 17:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:53.640 17:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:53.640 17:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:53.640 17:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:53.640 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.640 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:53.640 17:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:53.640 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.640 17:30:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:53.640 "name": "Existed_Raid", 00:33:53.640 "uuid": "d2ed9ba1-1ab3-49f7-9918-8085606c6f99", 00:33:53.640 "strip_size_kb": 64, 00:33:53.640 "state": "configuring", 00:33:53.640 "raid_level": "raid0", 00:33:53.640 "superblock": true, 00:33:53.640 "num_base_bdevs": 3, 00:33:53.640 "num_base_bdevs_discovered": 2, 00:33:53.640 "num_base_bdevs_operational": 3, 00:33:53.640 "base_bdevs_list": [ 00:33:53.640 { 00:33:53.640 "name": "BaseBdev1", 00:33:53.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:53.640 "is_configured": false, 00:33:53.640 "data_offset": 0, 00:33:53.640 "data_size": 0 00:33:53.640 }, 00:33:53.640 { 00:33:53.640 "name": "BaseBdev2", 00:33:53.640 "uuid": "ad514138-573a-4944-8ce1-50911c78b859", 00:33:53.640 "is_configured": true, 00:33:53.640 "data_offset": 2048, 00:33:53.640 "data_size": 63488 00:33:53.640 }, 00:33:53.640 { 00:33:53.640 "name": "BaseBdev3", 00:33:53.640 "uuid": "8082b6d6-7fc3-4970-9666-406d98783633", 00:33:53.640 "is_configured": true, 00:33:53.640 "data_offset": 2048, 00:33:53.640 "data_size": 63488 00:33:53.640 } 00:33:53.640 ] 00:33:53.640 }' 00:33:53.640 17:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:53.640 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:54.206 17:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:33:54.206 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.206 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:54.206 [2024-11-26 17:30:54.655355] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:54.206 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.206 17:30:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:33:54.206 17:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:54.206 17:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:54.206 17:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:54.206 17:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:54.206 17:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:54.206 17:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:54.207 17:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:54.207 17:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:54.207 17:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:54.207 17:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:54.207 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.207 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:54.207 17:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:54.207 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.207 17:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:54.207 "name": "Existed_Raid", 00:33:54.207 "uuid": "d2ed9ba1-1ab3-49f7-9918-8085606c6f99", 00:33:54.207 "strip_size_kb": 64, 
00:33:54.207 "state": "configuring", 00:33:54.207 "raid_level": "raid0", 00:33:54.207 "superblock": true, 00:33:54.207 "num_base_bdevs": 3, 00:33:54.207 "num_base_bdevs_discovered": 1, 00:33:54.207 "num_base_bdevs_operational": 3, 00:33:54.207 "base_bdevs_list": [ 00:33:54.207 { 00:33:54.207 "name": "BaseBdev1", 00:33:54.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:54.207 "is_configured": false, 00:33:54.207 "data_offset": 0, 00:33:54.207 "data_size": 0 00:33:54.207 }, 00:33:54.207 { 00:33:54.207 "name": null, 00:33:54.207 "uuid": "ad514138-573a-4944-8ce1-50911c78b859", 00:33:54.207 "is_configured": false, 00:33:54.207 "data_offset": 0, 00:33:54.207 "data_size": 63488 00:33:54.207 }, 00:33:54.207 { 00:33:54.207 "name": "BaseBdev3", 00:33:54.207 "uuid": "8082b6d6-7fc3-4970-9666-406d98783633", 00:33:54.207 "is_configured": true, 00:33:54.207 "data_offset": 2048, 00:33:54.207 "data_size": 63488 00:33:54.207 } 00:33:54.207 ] 00:33:54.207 }' 00:33:54.207 17:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:54.207 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:54.464 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:33:54.464 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:54.464 17:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.464 17:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:54.464 17:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.464 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:33:54.464 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:33:54.464 17:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.464 17:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:54.721 [2024-11-26 17:30:55.175952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:54.721 BaseBdev1 00:33:54.721 17:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.721 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:33:54.722 17:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:33:54.722 17:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:54.722 17:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:33:54.722 17:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:54.722 17:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:54.722 17:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:54.722 17:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.722 17:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:54.722 17:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.722 17:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:33:54.722 17:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.722 17:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:54.722 
[ 00:33:54.722 { 00:33:54.722 "name": "BaseBdev1", 00:33:54.722 "aliases": [ 00:33:54.722 "95492a73-369f-45f5-9bcd-1924d304f1f8" 00:33:54.722 ], 00:33:54.722 "product_name": "Malloc disk", 00:33:54.722 "block_size": 512, 00:33:54.722 "num_blocks": 65536, 00:33:54.722 "uuid": "95492a73-369f-45f5-9bcd-1924d304f1f8", 00:33:54.722 "assigned_rate_limits": { 00:33:54.722 "rw_ios_per_sec": 0, 00:33:54.722 "rw_mbytes_per_sec": 0, 00:33:54.722 "r_mbytes_per_sec": 0, 00:33:54.722 "w_mbytes_per_sec": 0 00:33:54.722 }, 00:33:54.722 "claimed": true, 00:33:54.722 "claim_type": "exclusive_write", 00:33:54.722 "zoned": false, 00:33:54.722 "supported_io_types": { 00:33:54.722 "read": true, 00:33:54.722 "write": true, 00:33:54.722 "unmap": true, 00:33:54.722 "flush": true, 00:33:54.722 "reset": true, 00:33:54.722 "nvme_admin": false, 00:33:54.722 "nvme_io": false, 00:33:54.722 "nvme_io_md": false, 00:33:54.722 "write_zeroes": true, 00:33:54.722 "zcopy": true, 00:33:54.722 "get_zone_info": false, 00:33:54.722 "zone_management": false, 00:33:54.722 "zone_append": false, 00:33:54.722 "compare": false, 00:33:54.722 "compare_and_write": false, 00:33:54.722 "abort": true, 00:33:54.722 "seek_hole": false, 00:33:54.722 "seek_data": false, 00:33:54.722 "copy": true, 00:33:54.722 "nvme_iov_md": false 00:33:54.722 }, 00:33:54.722 "memory_domains": [ 00:33:54.722 { 00:33:54.722 "dma_device_id": "system", 00:33:54.722 "dma_device_type": 1 00:33:54.722 }, 00:33:54.722 { 00:33:54.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:54.722 "dma_device_type": 2 00:33:54.722 } 00:33:54.722 ], 00:33:54.722 "driver_specific": {} 00:33:54.722 } 00:33:54.722 ] 00:33:54.722 17:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.722 17:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:33:54.722 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:33:54.722 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:54.722 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:54.722 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:54.722 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:54.722 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:54.722 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:54.722 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:54.722 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:54.722 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:54.722 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:54.722 17:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.722 17:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:54.722 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:54.722 17:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.722 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:54.722 "name": "Existed_Raid", 00:33:54.722 "uuid": "d2ed9ba1-1ab3-49f7-9918-8085606c6f99", 00:33:54.722 "strip_size_kb": 64, 00:33:54.722 "state": "configuring", 00:33:54.722 "raid_level": "raid0", 00:33:54.722 "superblock": true, 
00:33:54.722 "num_base_bdevs": 3, 00:33:54.722 "num_base_bdevs_discovered": 2, 00:33:54.722 "num_base_bdevs_operational": 3, 00:33:54.722 "base_bdevs_list": [ 00:33:54.722 { 00:33:54.722 "name": "BaseBdev1", 00:33:54.722 "uuid": "95492a73-369f-45f5-9bcd-1924d304f1f8", 00:33:54.722 "is_configured": true, 00:33:54.722 "data_offset": 2048, 00:33:54.722 "data_size": 63488 00:33:54.722 }, 00:33:54.722 { 00:33:54.722 "name": null, 00:33:54.722 "uuid": "ad514138-573a-4944-8ce1-50911c78b859", 00:33:54.722 "is_configured": false, 00:33:54.722 "data_offset": 0, 00:33:54.722 "data_size": 63488 00:33:54.722 }, 00:33:54.722 { 00:33:54.722 "name": "BaseBdev3", 00:33:54.722 "uuid": "8082b6d6-7fc3-4970-9666-406d98783633", 00:33:54.722 "is_configured": true, 00:33:54.722 "data_offset": 2048, 00:33:54.722 "data_size": 63488 00:33:54.722 } 00:33:54.722 ] 00:33:54.722 }' 00:33:54.722 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:54.722 17:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:54.980 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:54.980 17:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.980 17:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:54.980 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:33:54.980 17:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.980 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:33:54.980 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:33:54.980 17:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:33:54.980 17:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:54.980 [2024-11-26 17:30:55.667328] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:33:54.980 17:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.980 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:33:54.980 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:54.980 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:55.238 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:55.238 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:55.238 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:55.238 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:55.238 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:55.238 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:55.238 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:55.238 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:55.238 17:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.238 17:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:55.238 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:33:55.238 17:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.238 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:55.238 "name": "Existed_Raid", 00:33:55.238 "uuid": "d2ed9ba1-1ab3-49f7-9918-8085606c6f99", 00:33:55.238 "strip_size_kb": 64, 00:33:55.238 "state": "configuring", 00:33:55.238 "raid_level": "raid0", 00:33:55.238 "superblock": true, 00:33:55.238 "num_base_bdevs": 3, 00:33:55.238 "num_base_bdevs_discovered": 1, 00:33:55.238 "num_base_bdevs_operational": 3, 00:33:55.238 "base_bdevs_list": [ 00:33:55.238 { 00:33:55.238 "name": "BaseBdev1", 00:33:55.238 "uuid": "95492a73-369f-45f5-9bcd-1924d304f1f8", 00:33:55.238 "is_configured": true, 00:33:55.238 "data_offset": 2048, 00:33:55.238 "data_size": 63488 00:33:55.238 }, 00:33:55.238 { 00:33:55.238 "name": null, 00:33:55.238 "uuid": "ad514138-573a-4944-8ce1-50911c78b859", 00:33:55.238 "is_configured": false, 00:33:55.238 "data_offset": 0, 00:33:55.238 "data_size": 63488 00:33:55.238 }, 00:33:55.238 { 00:33:55.238 "name": null, 00:33:55.238 "uuid": "8082b6d6-7fc3-4970-9666-406d98783633", 00:33:55.238 "is_configured": false, 00:33:55.238 "data_offset": 0, 00:33:55.238 "data_size": 63488 00:33:55.238 } 00:33:55.238 ] 00:33:55.238 }' 00:33:55.238 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:55.238 17:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:55.496 17:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:33:55.496 17:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:55.496 17:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.496 17:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:33:55.496 17:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.496 17:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:33:55.496 17:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:33:55.496 17:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.496 17:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:55.496 [2024-11-26 17:30:56.134589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:55.496 17:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.496 17:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:33:55.496 17:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:55.496 17:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:55.496 17:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:55.496 17:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:55.496 17:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:55.497 17:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:55.497 17:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:55.497 17:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:55.497 17:30:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:33:55.497 17:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:55.497 17:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:55.497 17:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.497 17:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:55.497 17:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.755 17:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:55.755 "name": "Existed_Raid", 00:33:55.755 "uuid": "d2ed9ba1-1ab3-49f7-9918-8085606c6f99", 00:33:55.755 "strip_size_kb": 64, 00:33:55.755 "state": "configuring", 00:33:55.755 "raid_level": "raid0", 00:33:55.755 "superblock": true, 00:33:55.755 "num_base_bdevs": 3, 00:33:55.755 "num_base_bdevs_discovered": 2, 00:33:55.755 "num_base_bdevs_operational": 3, 00:33:55.755 "base_bdevs_list": [ 00:33:55.755 { 00:33:55.755 "name": "BaseBdev1", 00:33:55.755 "uuid": "95492a73-369f-45f5-9bcd-1924d304f1f8", 00:33:55.755 "is_configured": true, 00:33:55.755 "data_offset": 2048, 00:33:55.755 "data_size": 63488 00:33:55.755 }, 00:33:55.755 { 00:33:55.755 "name": null, 00:33:55.755 "uuid": "ad514138-573a-4944-8ce1-50911c78b859", 00:33:55.755 "is_configured": false, 00:33:55.755 "data_offset": 0, 00:33:55.755 "data_size": 63488 00:33:55.755 }, 00:33:55.755 { 00:33:55.755 "name": "BaseBdev3", 00:33:55.755 "uuid": "8082b6d6-7fc3-4970-9666-406d98783633", 00:33:55.755 "is_configured": true, 00:33:55.755 "data_offset": 2048, 00:33:55.755 "data_size": 63488 00:33:55.755 } 00:33:55.755 ] 00:33:55.755 }' 00:33:55.755 17:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:55.755 17:30:56 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:33:56.013 17:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:56.013 17:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:33:56.013 17:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.013 17:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:56.013 17:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.013 17:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:33:56.013 17:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:33:56.013 17:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.013 17:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:56.013 [2024-11-26 17:30:56.661732] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:56.271 17:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.271 17:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:33:56.271 17:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:56.271 17:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:56.271 17:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:56.271 17:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:56.271 17:30:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:56.271 17:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:56.271 17:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:56.271 17:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:56.271 17:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:56.271 17:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:56.272 17:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:56.272 17:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.272 17:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:56.272 17:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.272 17:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:56.272 "name": "Existed_Raid", 00:33:56.272 "uuid": "d2ed9ba1-1ab3-49f7-9918-8085606c6f99", 00:33:56.272 "strip_size_kb": 64, 00:33:56.272 "state": "configuring", 00:33:56.272 "raid_level": "raid0", 00:33:56.272 "superblock": true, 00:33:56.272 "num_base_bdevs": 3, 00:33:56.272 "num_base_bdevs_discovered": 1, 00:33:56.272 "num_base_bdevs_operational": 3, 00:33:56.272 "base_bdevs_list": [ 00:33:56.272 { 00:33:56.272 "name": null, 00:33:56.272 "uuid": "95492a73-369f-45f5-9bcd-1924d304f1f8", 00:33:56.272 "is_configured": false, 00:33:56.272 "data_offset": 0, 00:33:56.272 "data_size": 63488 00:33:56.272 }, 00:33:56.272 { 00:33:56.272 "name": null, 00:33:56.272 "uuid": "ad514138-573a-4944-8ce1-50911c78b859", 00:33:56.272 "is_configured": false, 00:33:56.272 "data_offset": 0, 00:33:56.272 
"data_size": 63488 00:33:56.272 }, 00:33:56.272 { 00:33:56.272 "name": "BaseBdev3", 00:33:56.272 "uuid": "8082b6d6-7fc3-4970-9666-406d98783633", 00:33:56.272 "is_configured": true, 00:33:56.272 "data_offset": 2048, 00:33:56.272 "data_size": 63488 00:33:56.272 } 00:33:56.272 ] 00:33:56.272 }' 00:33:56.272 17:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:56.272 17:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:56.530 17:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:56.530 17:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.530 17:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:56.530 17:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:33:56.530 17:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.530 17:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:33:56.530 17:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:33:56.530 17:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.530 17:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:56.530 [2024-11-26 17:30:57.177303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:56.530 17:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.530 17:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:33:56.530 17:30:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:56.530 17:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:56.530 17:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:56.530 17:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:56.530 17:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:56.530 17:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:56.530 17:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:56.530 17:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:56.530 17:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:56.530 17:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:56.530 17:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.530 17:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:56.530 17:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:56.531 17:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.788 17:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:56.788 "name": "Existed_Raid", 00:33:56.788 "uuid": "d2ed9ba1-1ab3-49f7-9918-8085606c6f99", 00:33:56.788 "strip_size_kb": 64, 00:33:56.788 "state": "configuring", 00:33:56.788 "raid_level": "raid0", 00:33:56.788 "superblock": true, 00:33:56.788 "num_base_bdevs": 3, 00:33:56.788 
"num_base_bdevs_discovered": 2, 00:33:56.788 "num_base_bdevs_operational": 3, 00:33:56.788 "base_bdevs_list": [ 00:33:56.788 { 00:33:56.789 "name": null, 00:33:56.789 "uuid": "95492a73-369f-45f5-9bcd-1924d304f1f8", 00:33:56.789 "is_configured": false, 00:33:56.789 "data_offset": 0, 00:33:56.789 "data_size": 63488 00:33:56.789 }, 00:33:56.789 { 00:33:56.789 "name": "BaseBdev2", 00:33:56.789 "uuid": "ad514138-573a-4944-8ce1-50911c78b859", 00:33:56.789 "is_configured": true, 00:33:56.789 "data_offset": 2048, 00:33:56.789 "data_size": 63488 00:33:56.789 }, 00:33:56.789 { 00:33:56.789 "name": "BaseBdev3", 00:33:56.789 "uuid": "8082b6d6-7fc3-4970-9666-406d98783633", 00:33:56.789 "is_configured": true, 00:33:56.789 "data_offset": 2048, 00:33:56.789 "data_size": 63488 00:33:56.789 } 00:33:56.789 ] 00:33:56.789 }' 00:33:56.789 17:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:56.789 17:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:57.048 17:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:57.048 17:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.048 17:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:33:57.048 17:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:57.048 17:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.048 17:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:33:57.048 17:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:57.048 17:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.048 17:30:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:57.048 17:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:33:57.048 17:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.048 17:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 95492a73-369f-45f5-9bcd-1924d304f1f8 00:33:57.048 17:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.048 17:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:57.307 [2024-11-26 17:30:57.777999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:33:57.307 [2024-11-26 17:30:57.778231] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:33:57.307 [2024-11-26 17:30:57.778264] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:33:57.307 [2024-11-26 17:30:57.778530] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:33:57.307 [2024-11-26 17:30:57.778725] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:33:57.307 [2024-11-26 17:30:57.778735] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:33:57.307 NewBaseBdev 00:33:57.307 [2024-11-26 17:30:57.778874] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:57.307 17:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.307 17:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:33:57.307 17:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 
00:33:57.307 17:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:57.307 17:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:33:57.307 17:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:57.307 17:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:57.307 17:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:57.307 17:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.307 17:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:57.307 17:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.307 17:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:33:57.307 17:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.307 17:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:57.307 [ 00:33:57.307 { 00:33:57.307 "name": "NewBaseBdev", 00:33:57.307 "aliases": [ 00:33:57.307 "95492a73-369f-45f5-9bcd-1924d304f1f8" 00:33:57.307 ], 00:33:57.307 "product_name": "Malloc disk", 00:33:57.307 "block_size": 512, 00:33:57.307 "num_blocks": 65536, 00:33:57.307 "uuid": "95492a73-369f-45f5-9bcd-1924d304f1f8", 00:33:57.307 "assigned_rate_limits": { 00:33:57.307 "rw_ios_per_sec": 0, 00:33:57.307 "rw_mbytes_per_sec": 0, 00:33:57.307 "r_mbytes_per_sec": 0, 00:33:57.307 "w_mbytes_per_sec": 0 00:33:57.307 }, 00:33:57.307 "claimed": true, 00:33:57.307 "claim_type": "exclusive_write", 00:33:57.307 "zoned": false, 00:33:57.307 "supported_io_types": { 00:33:57.307 "read": true, 00:33:57.307 "write": true, 
00:33:57.307 "unmap": true, 00:33:57.307 "flush": true, 00:33:57.307 "reset": true, 00:33:57.307 "nvme_admin": false, 00:33:57.307 "nvme_io": false, 00:33:57.307 "nvme_io_md": false, 00:33:57.307 "write_zeroes": true, 00:33:57.307 "zcopy": true, 00:33:57.307 "get_zone_info": false, 00:33:57.307 "zone_management": false, 00:33:57.307 "zone_append": false, 00:33:57.307 "compare": false, 00:33:57.307 "compare_and_write": false, 00:33:57.307 "abort": true, 00:33:57.307 "seek_hole": false, 00:33:57.307 "seek_data": false, 00:33:57.307 "copy": true, 00:33:57.307 "nvme_iov_md": false 00:33:57.307 }, 00:33:57.307 "memory_domains": [ 00:33:57.307 { 00:33:57.307 "dma_device_id": "system", 00:33:57.307 "dma_device_type": 1 00:33:57.307 }, 00:33:57.307 { 00:33:57.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:57.307 "dma_device_type": 2 00:33:57.307 } 00:33:57.307 ], 00:33:57.307 "driver_specific": {} 00:33:57.307 } 00:33:57.307 ] 00:33:57.307 17:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.307 17:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:33:57.307 17:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:33:57.307 17:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:57.307 17:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:57.307 17:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:33:57.307 17:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:57.307 17:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:57.307 17:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:33:57.307 17:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:57.307 17:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:57.307 17:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:57.307 17:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:57.307 17:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:57.308 17:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.308 17:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:57.308 17:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.308 17:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:57.308 "name": "Existed_Raid", 00:33:57.308 "uuid": "d2ed9ba1-1ab3-49f7-9918-8085606c6f99", 00:33:57.308 "strip_size_kb": 64, 00:33:57.308 "state": "online", 00:33:57.308 "raid_level": "raid0", 00:33:57.308 "superblock": true, 00:33:57.308 "num_base_bdevs": 3, 00:33:57.308 "num_base_bdevs_discovered": 3, 00:33:57.308 "num_base_bdevs_operational": 3, 00:33:57.308 "base_bdevs_list": [ 00:33:57.308 { 00:33:57.308 "name": "NewBaseBdev", 00:33:57.308 "uuid": "95492a73-369f-45f5-9bcd-1924d304f1f8", 00:33:57.308 "is_configured": true, 00:33:57.308 "data_offset": 2048, 00:33:57.308 "data_size": 63488 00:33:57.308 }, 00:33:57.308 { 00:33:57.308 "name": "BaseBdev2", 00:33:57.308 "uuid": "ad514138-573a-4944-8ce1-50911c78b859", 00:33:57.308 "is_configured": true, 00:33:57.308 "data_offset": 2048, 00:33:57.308 "data_size": 63488 00:33:57.308 }, 00:33:57.308 { 00:33:57.308 "name": "BaseBdev3", 00:33:57.308 "uuid": 
"8082b6d6-7fc3-4970-9666-406d98783633", 00:33:57.308 "is_configured": true, 00:33:57.308 "data_offset": 2048, 00:33:57.308 "data_size": 63488 00:33:57.308 } 00:33:57.308 ] 00:33:57.308 }' 00:33:57.308 17:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:57.308 17:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:57.875 17:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:33:57.875 17:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:33:57.875 17:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:33:57.875 17:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:33:57.875 17:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:33:57.875 17:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:33:57.875 17:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:33:57.875 17:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:33:57.875 17:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.875 17:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:57.875 [2024-11-26 17:30:58.277519] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:57.875 17:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.875 17:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:57.875 "name": "Existed_Raid", 00:33:57.875 "aliases": [ 00:33:57.875 "d2ed9ba1-1ab3-49f7-9918-8085606c6f99" 
00:33:57.875 ], 00:33:57.875 "product_name": "Raid Volume", 00:33:57.875 "block_size": 512, 00:33:57.875 "num_blocks": 190464, 00:33:57.875 "uuid": "d2ed9ba1-1ab3-49f7-9918-8085606c6f99", 00:33:57.875 "assigned_rate_limits": { 00:33:57.875 "rw_ios_per_sec": 0, 00:33:57.875 "rw_mbytes_per_sec": 0, 00:33:57.875 "r_mbytes_per_sec": 0, 00:33:57.875 "w_mbytes_per_sec": 0 00:33:57.875 }, 00:33:57.875 "claimed": false, 00:33:57.875 "zoned": false, 00:33:57.875 "supported_io_types": { 00:33:57.875 "read": true, 00:33:57.875 "write": true, 00:33:57.875 "unmap": true, 00:33:57.875 "flush": true, 00:33:57.875 "reset": true, 00:33:57.875 "nvme_admin": false, 00:33:57.875 "nvme_io": false, 00:33:57.875 "nvme_io_md": false, 00:33:57.875 "write_zeroes": true, 00:33:57.875 "zcopy": false, 00:33:57.875 "get_zone_info": false, 00:33:57.875 "zone_management": false, 00:33:57.875 "zone_append": false, 00:33:57.875 "compare": false, 00:33:57.875 "compare_and_write": false, 00:33:57.875 "abort": false, 00:33:57.875 "seek_hole": false, 00:33:57.875 "seek_data": false, 00:33:57.875 "copy": false, 00:33:57.875 "nvme_iov_md": false 00:33:57.875 }, 00:33:57.875 "memory_domains": [ 00:33:57.875 { 00:33:57.875 "dma_device_id": "system", 00:33:57.875 "dma_device_type": 1 00:33:57.875 }, 00:33:57.875 { 00:33:57.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:57.875 "dma_device_type": 2 00:33:57.875 }, 00:33:57.875 { 00:33:57.876 "dma_device_id": "system", 00:33:57.876 "dma_device_type": 1 00:33:57.876 }, 00:33:57.876 { 00:33:57.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:57.876 "dma_device_type": 2 00:33:57.876 }, 00:33:57.876 { 00:33:57.876 "dma_device_id": "system", 00:33:57.876 "dma_device_type": 1 00:33:57.876 }, 00:33:57.876 { 00:33:57.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:57.876 "dma_device_type": 2 00:33:57.876 } 00:33:57.876 ], 00:33:57.876 "driver_specific": { 00:33:57.876 "raid": { 00:33:57.876 "uuid": "d2ed9ba1-1ab3-49f7-9918-8085606c6f99", 00:33:57.876 
"strip_size_kb": 64, 00:33:57.876 "state": "online", 00:33:57.876 "raid_level": "raid0", 00:33:57.876 "superblock": true, 00:33:57.876 "num_base_bdevs": 3, 00:33:57.876 "num_base_bdevs_discovered": 3, 00:33:57.876 "num_base_bdevs_operational": 3, 00:33:57.876 "base_bdevs_list": [ 00:33:57.876 { 00:33:57.876 "name": "NewBaseBdev", 00:33:57.876 "uuid": "95492a73-369f-45f5-9bcd-1924d304f1f8", 00:33:57.876 "is_configured": true, 00:33:57.876 "data_offset": 2048, 00:33:57.876 "data_size": 63488 00:33:57.876 }, 00:33:57.876 { 00:33:57.876 "name": "BaseBdev2", 00:33:57.876 "uuid": "ad514138-573a-4944-8ce1-50911c78b859", 00:33:57.876 "is_configured": true, 00:33:57.876 "data_offset": 2048, 00:33:57.876 "data_size": 63488 00:33:57.876 }, 00:33:57.876 { 00:33:57.876 "name": "BaseBdev3", 00:33:57.876 "uuid": "8082b6d6-7fc3-4970-9666-406d98783633", 00:33:57.876 "is_configured": true, 00:33:57.876 "data_offset": 2048, 00:33:57.876 "data_size": 63488 00:33:57.876 } 00:33:57.876 ] 00:33:57.876 } 00:33:57.876 } 00:33:57.876 }' 00:33:57.876 17:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:57.876 17:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:33:57.876 BaseBdev2 00:33:57.876 BaseBdev3' 00:33:57.876 17:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:57.876 17:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:33:57.876 17:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:57.876 17:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:33:57.876 17:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:33:57.876 17:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:57.876 17:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:57.876 17:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.876 17:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:57.876 17:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:57.876 17:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:57.876 17:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:33:57.876 17:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.876 17:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:57.876 17:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:57.876 17:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.876 17:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:57.876 17:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:57.876 17:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:57.876 17:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:33:57.876 17:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.876 17:30:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:57.876 17:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:57.876 17:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.876 17:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:57.876 17:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:57.876 17:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:33:57.876 17:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.876 17:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:57.876 [2024-11-26 17:30:58.564718] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:57.876 [2024-11-26 17:30:58.564799] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:57.876 [2024-11-26 17:30:58.564925] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:57.876 [2024-11-26 17:30:58.565013] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:57.876 [2024-11-26 17:30:58.565067] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:33:58.135 17:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.135 17:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64680 00:33:58.135 17:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64680 ']' 00:33:58.135 17:30:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 64680 00:33:58.135 17:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:33:58.135 17:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:58.135 17:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64680 00:33:58.135 killing process with pid 64680 00:33:58.135 17:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:58.135 17:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:58.135 17:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64680' 00:33:58.135 17:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64680 00:33:58.135 [2024-11-26 17:30:58.609964] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:58.135 17:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64680 00:33:58.393 [2024-11-26 17:30:58.938482] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:59.766 ************************************ 00:33:59.766 END TEST raid_state_function_test_sb 00:33:59.766 ************************************ 00:33:59.766 17:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:33:59.766 00:33:59.766 real 0m10.649s 00:33:59.766 user 0m16.781s 00:33:59.766 sys 0m1.749s 00:33:59.766 17:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:59.766 17:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:59.766 17:31:00 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:33:59.766 17:31:00 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:59.766 17:31:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:59.766 17:31:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:59.766 ************************************ 00:33:59.766 START TEST raid_superblock_test 00:33:59.766 ************************************ 00:33:59.766 17:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:33:59.766 17:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:33:59.766 17:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:33:59.766 17:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:33:59.766 17:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:33:59.766 17:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:33:59.766 17:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:33:59.766 17:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:33:59.766 17:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:33:59.766 17:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:33:59.766 17:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:33:59.766 17:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:33:59.766 17:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:33:59.766 17:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:33:59.766 17:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:33:59.766 17:31:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:33:59.766 17:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:33:59.766 17:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65306 00:33:59.766 17:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:33:59.766 17:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65306 00:33:59.766 17:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65306 ']' 00:33:59.766 17:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:59.766 17:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:59.766 17:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:59.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:59.766 17:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:59.766 17:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:59.766 [2024-11-26 17:31:00.332572] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:33:59.766 [2024-11-26 17:31:00.332710] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65306 ] 00:34:00.022 [2024-11-26 17:31:00.510024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:00.022 [2024-11-26 17:31:00.628010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:00.279 [2024-11-26 17:31:00.836794] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:00.279 [2024-11-26 17:31:00.836861] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:00.535 17:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:00.535 17:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:34:00.536 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:34:00.536 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:34:00.536 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:34:00.536 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:34:00.536 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:34:00.536 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:34:00.536 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:34:00.536 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:34:00.536 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:34:00.536 
17:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.536 17:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:00.793 malloc1 00:34:00.793 17:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.793 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:34:00.793 17:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.793 17:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:00.793 [2024-11-26 17:31:01.248830] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:34:00.793 [2024-11-26 17:31:01.248948] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:00.793 [2024-11-26 17:31:01.249003] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:34:00.793 [2024-11-26 17:31:01.249039] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:00.793 [2024-11-26 17:31:01.251250] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:00.793 [2024-11-26 17:31:01.251320] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:34:00.793 pt1 00:34:00.793 17:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.793 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:34:00.793 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:34:00.793 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:34:00.793 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:34:00.793 17:31:01 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:34:00.793 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:34:00.793 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:34:00.793 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:34:00.793 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:34:00.793 17:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.793 17:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:00.793 malloc2 00:34:00.793 17:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.793 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:00.793 17:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.793 17:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:00.793 [2024-11-26 17:31:01.311184] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:00.793 [2024-11-26 17:31:01.311305] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:00.793 [2024-11-26 17:31:01.311354] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:34:00.793 [2024-11-26 17:31:01.311394] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:00.793 [2024-11-26 17:31:01.313636] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:00.793 [2024-11-26 17:31:01.313708] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:00.793 
pt2 00:34:00.793 17:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.793 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:34:00.793 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:34:00.793 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:34:00.793 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:34:00.793 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:34:00.794 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:34:00.794 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:34:00.794 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:34:00.794 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:34:00.794 17:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.794 17:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:00.794 malloc3 00:34:00.794 17:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.794 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:34:00.794 17:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.794 17:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:00.794 [2024-11-26 17:31:01.386607] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:34:00.794 [2024-11-26 17:31:01.386734] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:00.794 [2024-11-26 17:31:01.386775] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:34:00.794 [2024-11-26 17:31:01.386806] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:00.794 [2024-11-26 17:31:01.389018] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:00.794 [2024-11-26 17:31:01.389109] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:34:00.794 pt3 00:34:00.794 17:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.794 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:34:00.794 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:34:00.794 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:34:00.794 17:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.794 17:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:00.794 [2024-11-26 17:31:01.398644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:34:00.794 [2024-11-26 17:31:01.400645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:00.794 [2024-11-26 17:31:01.400766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:34:00.794 [2024-11-26 17:31:01.400987] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:34:00.794 [2024-11-26 17:31:01.401042] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:34:00.794 [2024-11-26 17:31:01.401358] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:34:00.794 [2024-11-26 17:31:01.401592] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:34:00.794 [2024-11-26 17:31:01.401637] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:34:00.794 [2024-11-26 17:31:01.401872] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:00.794 17:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.794 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:34:00.794 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:00.794 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:00.794 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:34:00.794 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:00.794 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:00.794 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:00.794 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:00.794 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:00.794 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:00.794 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:00.794 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:00.794 17:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.794 17:31:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:00.794 17:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.794 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:00.794 "name": "raid_bdev1", 00:34:00.794 "uuid": "867cb242-32b7-43a0-b4d4-c30cd90588c4", 00:34:00.794 "strip_size_kb": 64, 00:34:00.794 "state": "online", 00:34:00.794 "raid_level": "raid0", 00:34:00.794 "superblock": true, 00:34:00.794 "num_base_bdevs": 3, 00:34:00.794 "num_base_bdevs_discovered": 3, 00:34:00.794 "num_base_bdevs_operational": 3, 00:34:00.794 "base_bdevs_list": [ 00:34:00.794 { 00:34:00.794 "name": "pt1", 00:34:00.794 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:00.794 "is_configured": true, 00:34:00.794 "data_offset": 2048, 00:34:00.794 "data_size": 63488 00:34:00.794 }, 00:34:00.794 { 00:34:00.794 "name": "pt2", 00:34:00.794 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:00.794 "is_configured": true, 00:34:00.794 "data_offset": 2048, 00:34:00.794 "data_size": 63488 00:34:00.794 }, 00:34:00.794 { 00:34:00.794 "name": "pt3", 00:34:00.794 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:00.794 "is_configured": true, 00:34:00.794 "data_offset": 2048, 00:34:00.794 "data_size": 63488 00:34:00.794 } 00:34:00.794 ] 00:34:00.794 }' 00:34:00.794 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:00.794 17:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:01.360 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:34:01.360 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:34:01.360 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:34:01.360 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:34:01.360 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:34:01.360 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:34:01.360 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:01.360 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:34:01.360 17:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.360 17:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:01.360 [2024-11-26 17:31:01.826224] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:01.360 17:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.360 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:01.360 "name": "raid_bdev1", 00:34:01.360 "aliases": [ 00:34:01.360 "867cb242-32b7-43a0-b4d4-c30cd90588c4" 00:34:01.360 ], 00:34:01.360 "product_name": "Raid Volume", 00:34:01.360 "block_size": 512, 00:34:01.360 "num_blocks": 190464, 00:34:01.360 "uuid": "867cb242-32b7-43a0-b4d4-c30cd90588c4", 00:34:01.360 "assigned_rate_limits": { 00:34:01.360 "rw_ios_per_sec": 0, 00:34:01.360 "rw_mbytes_per_sec": 0, 00:34:01.360 "r_mbytes_per_sec": 0, 00:34:01.360 "w_mbytes_per_sec": 0 00:34:01.360 }, 00:34:01.360 "claimed": false, 00:34:01.360 "zoned": false, 00:34:01.360 "supported_io_types": { 00:34:01.360 "read": true, 00:34:01.360 "write": true, 00:34:01.360 "unmap": true, 00:34:01.360 "flush": true, 00:34:01.360 "reset": true, 00:34:01.360 "nvme_admin": false, 00:34:01.360 "nvme_io": false, 00:34:01.360 "nvme_io_md": false, 00:34:01.360 "write_zeroes": true, 00:34:01.360 "zcopy": false, 00:34:01.360 "get_zone_info": false, 00:34:01.360 "zone_management": false, 00:34:01.360 "zone_append": false, 00:34:01.360 "compare": 
false, 00:34:01.360 "compare_and_write": false, 00:34:01.360 "abort": false, 00:34:01.360 "seek_hole": false, 00:34:01.360 "seek_data": false, 00:34:01.360 "copy": false, 00:34:01.360 "nvme_iov_md": false 00:34:01.360 }, 00:34:01.360 "memory_domains": [ 00:34:01.360 { 00:34:01.360 "dma_device_id": "system", 00:34:01.360 "dma_device_type": 1 00:34:01.360 }, 00:34:01.360 { 00:34:01.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:01.360 "dma_device_type": 2 00:34:01.360 }, 00:34:01.360 { 00:34:01.360 "dma_device_id": "system", 00:34:01.360 "dma_device_type": 1 00:34:01.360 }, 00:34:01.360 { 00:34:01.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:01.360 "dma_device_type": 2 00:34:01.360 }, 00:34:01.360 { 00:34:01.360 "dma_device_id": "system", 00:34:01.360 "dma_device_type": 1 00:34:01.360 }, 00:34:01.360 { 00:34:01.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:01.360 "dma_device_type": 2 00:34:01.360 } 00:34:01.360 ], 00:34:01.360 "driver_specific": { 00:34:01.360 "raid": { 00:34:01.360 "uuid": "867cb242-32b7-43a0-b4d4-c30cd90588c4", 00:34:01.360 "strip_size_kb": 64, 00:34:01.360 "state": "online", 00:34:01.360 "raid_level": "raid0", 00:34:01.360 "superblock": true, 00:34:01.360 "num_base_bdevs": 3, 00:34:01.360 "num_base_bdevs_discovered": 3, 00:34:01.360 "num_base_bdevs_operational": 3, 00:34:01.360 "base_bdevs_list": [ 00:34:01.360 { 00:34:01.360 "name": "pt1", 00:34:01.360 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:01.360 "is_configured": true, 00:34:01.360 "data_offset": 2048, 00:34:01.360 "data_size": 63488 00:34:01.360 }, 00:34:01.360 { 00:34:01.360 "name": "pt2", 00:34:01.360 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:01.360 "is_configured": true, 00:34:01.360 "data_offset": 2048, 00:34:01.360 "data_size": 63488 00:34:01.360 }, 00:34:01.360 { 00:34:01.360 "name": "pt3", 00:34:01.360 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:01.360 "is_configured": true, 00:34:01.360 "data_offset": 2048, 00:34:01.360 "data_size": 
63488 00:34:01.360 } 00:34:01.360 ] 00:34:01.360 } 00:34:01.360 } 00:34:01.360 }' 00:34:01.360 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:01.360 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:34:01.360 pt2 00:34:01.360 pt3' 00:34:01.360 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:01.360 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:34:01.360 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:01.360 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:01.360 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:34:01.360 17:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.360 17:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:01.360 17:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.360 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:01.360 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:01.360 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:01.360 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:01.360 17:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:34:01.360 17:31:01 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.360 17:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:01.360 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.360 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:01.360 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:01.360 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:01.360 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:34:01.360 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:01.360 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.360 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:01.619 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.619 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:01.619 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:01.619 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:01.619 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:34:01.619 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.619 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:01.619 [2024-11-26 17:31:02.093794] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:01.619 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:34:01.619 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=867cb242-32b7-43a0-b4d4-c30cd90588c4 00:34:01.619 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 867cb242-32b7-43a0-b4d4-c30cd90588c4 ']' 00:34:01.619 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:34:01.619 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.619 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:01.619 [2024-11-26 17:31:02.141359] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:01.619 [2024-11-26 17:31:02.141442] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:01.619 [2024-11-26 17:31:02.141588] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:01.619 [2024-11-26 17:31:02.141697] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:01.620 [2024-11-26 17:31:02.141751] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:34:01.620 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.620 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:01.620 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:34:01.620 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.620 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:01.620 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.620 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:34:01.620 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:34:01.620 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:34:01.620 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:34:01.620 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.620 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:01.620 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.620 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:34:01.620 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:34:01.620 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.620 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:01.620 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.620 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:34:01.620 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:34:01.620 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.620 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:01.620 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.620 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:34:01.620 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:34:01.620 17:31:02 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.620 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:01.620 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.620 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:34:01.620 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:34:01.620 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:34:01.620 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:34:01.620 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:01.620 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:01.620 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:01.620 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:01.620 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:34:01.620 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.620 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:01.620 [2024-11-26 17:31:02.289148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:34:01.620 [2024-11-26 17:31:02.291110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:34:01.620 [2024-11-26 17:31:02.291171] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:34:01.620 [2024-11-26 17:31:02.291227] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:34:01.620 [2024-11-26 17:31:02.291283] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:34:01.620 [2024-11-26 17:31:02.291314] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:34:01.620 [2024-11-26 17:31:02.291331] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:01.620 [2024-11-26 17:31:02.291343] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:34:01.620 request: 00:34:01.620 { 00:34:01.620 "name": "raid_bdev1", 00:34:01.620 "raid_level": "raid0", 00:34:01.620 "base_bdevs": [ 00:34:01.620 "malloc1", 00:34:01.620 "malloc2", 00:34:01.620 "malloc3" 00:34:01.620 ], 00:34:01.620 "strip_size_kb": 64, 00:34:01.620 "superblock": false, 00:34:01.620 "method": "bdev_raid_create", 00:34:01.620 "req_id": 1 00:34:01.620 } 00:34:01.620 Got JSON-RPC error response 00:34:01.620 response: 00:34:01.620 { 00:34:01.620 "code": -17, 00:34:01.620 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:34:01.620 } 00:34:01.620 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:01.620 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:34:01.620 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:01.620 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:01.620 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:01.620 17:31:02 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:01.620 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:34:01.620 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.620 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:01.877 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.877 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:34:01.877 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:34:01.877 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:34:01.877 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.877 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:01.877 [2024-11-26 17:31:02.352973] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:34:01.877 [2024-11-26 17:31:02.353073] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:01.877 [2024-11-26 17:31:02.353127] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:34:01.877 [2024-11-26 17:31:02.353167] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:01.877 [2024-11-26 17:31:02.355383] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:01.877 [2024-11-26 17:31:02.355453] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:34:01.877 [2024-11-26 17:31:02.355599] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:34:01.877 [2024-11-26 17:31:02.355691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:34:01.877 pt1 00:34:01.877 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.877 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:34:01.877 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:01.877 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:01.877 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:34:01.877 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:01.877 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:01.878 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:01.878 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:01.878 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:01.878 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:01.878 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:01.878 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:01.878 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.878 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:01.878 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.878 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:01.878 "name": "raid_bdev1", 00:34:01.878 "uuid": "867cb242-32b7-43a0-b4d4-c30cd90588c4", 00:34:01.878 
"strip_size_kb": 64, 00:34:01.878 "state": "configuring", 00:34:01.878 "raid_level": "raid0", 00:34:01.878 "superblock": true, 00:34:01.878 "num_base_bdevs": 3, 00:34:01.878 "num_base_bdevs_discovered": 1, 00:34:01.878 "num_base_bdevs_operational": 3, 00:34:01.878 "base_bdevs_list": [ 00:34:01.878 { 00:34:01.878 "name": "pt1", 00:34:01.878 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:01.878 "is_configured": true, 00:34:01.878 "data_offset": 2048, 00:34:01.878 "data_size": 63488 00:34:01.878 }, 00:34:01.878 { 00:34:01.878 "name": null, 00:34:01.878 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:01.878 "is_configured": false, 00:34:01.878 "data_offset": 2048, 00:34:01.878 "data_size": 63488 00:34:01.878 }, 00:34:01.878 { 00:34:01.878 "name": null, 00:34:01.878 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:01.878 "is_configured": false, 00:34:01.878 "data_offset": 2048, 00:34:01.878 "data_size": 63488 00:34:01.878 } 00:34:01.878 ] 00:34:01.878 }' 00:34:01.878 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:01.878 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:02.135 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:34:02.135 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:02.135 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.135 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:02.135 [2024-11-26 17:31:02.816286] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:02.135 [2024-11-26 17:31:02.816373] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:02.135 [2024-11-26 17:31:02.816405] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:34:02.135 [2024-11-26 17:31:02.816417] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:02.135 [2024-11-26 17:31:02.816937] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:02.135 [2024-11-26 17:31:02.816958] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:02.135 [2024-11-26 17:31:02.817049] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:34:02.135 [2024-11-26 17:31:02.817080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:02.135 pt2 00:34:02.135 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.135 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:34:02.135 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.135 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:02.135 [2024-11-26 17:31:02.824266] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:34:02.391 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.391 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:34:02.391 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:02.391 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:02.391 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:34:02.391 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:02.391 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:02.391 17:31:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:02.391 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:02.392 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:02.392 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:02.392 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:02.392 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:02.392 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.392 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:02.392 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.392 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:02.392 "name": "raid_bdev1", 00:34:02.392 "uuid": "867cb242-32b7-43a0-b4d4-c30cd90588c4", 00:34:02.392 "strip_size_kb": 64, 00:34:02.392 "state": "configuring", 00:34:02.392 "raid_level": "raid0", 00:34:02.392 "superblock": true, 00:34:02.392 "num_base_bdevs": 3, 00:34:02.392 "num_base_bdevs_discovered": 1, 00:34:02.392 "num_base_bdevs_operational": 3, 00:34:02.392 "base_bdevs_list": [ 00:34:02.392 { 00:34:02.392 "name": "pt1", 00:34:02.392 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:02.392 "is_configured": true, 00:34:02.392 "data_offset": 2048, 00:34:02.392 "data_size": 63488 00:34:02.392 }, 00:34:02.392 { 00:34:02.392 "name": null, 00:34:02.392 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:02.392 "is_configured": false, 00:34:02.392 "data_offset": 0, 00:34:02.392 "data_size": 63488 00:34:02.392 }, 00:34:02.392 { 00:34:02.392 "name": null, 00:34:02.392 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:02.392 
"is_configured": false, 00:34:02.392 "data_offset": 2048, 00:34:02.392 "data_size": 63488 00:34:02.392 } 00:34:02.392 ] 00:34:02.392 }' 00:34:02.392 17:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:02.392 17:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:02.650 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:34:02.650 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:34:02.650 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:02.650 17:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.650 17:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:02.650 [2024-11-26 17:31:03.239586] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:02.650 [2024-11-26 17:31:03.239673] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:02.650 [2024-11-26 17:31:03.239694] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:34:02.650 [2024-11-26 17:31:03.239707] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:02.650 [2024-11-26 17:31:03.240229] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:02.650 [2024-11-26 17:31:03.240271] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:02.650 [2024-11-26 17:31:03.240364] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:34:02.650 [2024-11-26 17:31:03.240392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:02.650 pt2 00:34:02.650 17:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:34:02.650 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:34:02.650 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:34:02.650 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:34:02.650 17:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.650 17:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:02.650 [2024-11-26 17:31:03.247555] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:34:02.650 [2024-11-26 17:31:03.247658] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:02.650 [2024-11-26 17:31:03.247678] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:34:02.650 [2024-11-26 17:31:03.247690] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:02.650 [2024-11-26 17:31:03.248137] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:02.650 [2024-11-26 17:31:03.248178] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:34:02.650 [2024-11-26 17:31:03.248259] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:34:02.650 [2024-11-26 17:31:03.248286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:34:02.650 [2024-11-26 17:31:03.248437] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:34:02.650 [2024-11-26 17:31:03.248456] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:34:02.650 [2024-11-26 17:31:03.248778] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:34:02.650 [2024-11-26 17:31:03.248941] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:34:02.650 [2024-11-26 17:31:03.248951] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:34:02.650 [2024-11-26 17:31:03.249101] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:02.650 pt3 00:34:02.651 17:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.651 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:34:02.651 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:34:02.651 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:34:02.651 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:02.651 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:02.651 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:34:02.651 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:02.651 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:02.651 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:02.651 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:02.651 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:02.651 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:02.651 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:02.651 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:34:02.651 17:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.651 17:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:02.651 17:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.651 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:02.651 "name": "raid_bdev1", 00:34:02.651 "uuid": "867cb242-32b7-43a0-b4d4-c30cd90588c4", 00:34:02.651 "strip_size_kb": 64, 00:34:02.651 "state": "online", 00:34:02.651 "raid_level": "raid0", 00:34:02.651 "superblock": true, 00:34:02.651 "num_base_bdevs": 3, 00:34:02.651 "num_base_bdevs_discovered": 3, 00:34:02.651 "num_base_bdevs_operational": 3, 00:34:02.651 "base_bdevs_list": [ 00:34:02.651 { 00:34:02.651 "name": "pt1", 00:34:02.651 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:02.651 "is_configured": true, 00:34:02.651 "data_offset": 2048, 00:34:02.651 "data_size": 63488 00:34:02.651 }, 00:34:02.651 { 00:34:02.651 "name": "pt2", 00:34:02.651 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:02.651 "is_configured": true, 00:34:02.651 "data_offset": 2048, 00:34:02.651 "data_size": 63488 00:34:02.651 }, 00:34:02.651 { 00:34:02.651 "name": "pt3", 00:34:02.651 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:02.651 "is_configured": true, 00:34:02.651 "data_offset": 2048, 00:34:02.651 "data_size": 63488 00:34:02.651 } 00:34:02.651 ] 00:34:02.651 }' 00:34:02.651 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:02.651 17:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:03.217 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:34:03.217 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:34:03.217 17:31:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:34:03.218 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:34:03.218 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:34:03.218 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:34:03.218 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:34:03.218 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:03.218 17:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.218 17:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:03.218 [2024-11-26 17:31:03.687209] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:03.218 17:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.218 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:03.218 "name": "raid_bdev1", 00:34:03.218 "aliases": [ 00:34:03.218 "867cb242-32b7-43a0-b4d4-c30cd90588c4" 00:34:03.218 ], 00:34:03.218 "product_name": "Raid Volume", 00:34:03.218 "block_size": 512, 00:34:03.218 "num_blocks": 190464, 00:34:03.218 "uuid": "867cb242-32b7-43a0-b4d4-c30cd90588c4", 00:34:03.218 "assigned_rate_limits": { 00:34:03.218 "rw_ios_per_sec": 0, 00:34:03.218 "rw_mbytes_per_sec": 0, 00:34:03.218 "r_mbytes_per_sec": 0, 00:34:03.218 "w_mbytes_per_sec": 0 00:34:03.218 }, 00:34:03.218 "claimed": false, 00:34:03.218 "zoned": false, 00:34:03.218 "supported_io_types": { 00:34:03.218 "read": true, 00:34:03.218 "write": true, 00:34:03.218 "unmap": true, 00:34:03.218 "flush": true, 00:34:03.218 "reset": true, 00:34:03.218 "nvme_admin": false, 00:34:03.218 "nvme_io": false, 00:34:03.218 "nvme_io_md": false, 00:34:03.218 
"write_zeroes": true, 00:34:03.218 "zcopy": false, 00:34:03.218 "get_zone_info": false, 00:34:03.218 "zone_management": false, 00:34:03.218 "zone_append": false, 00:34:03.218 "compare": false, 00:34:03.218 "compare_and_write": false, 00:34:03.218 "abort": false, 00:34:03.218 "seek_hole": false, 00:34:03.218 "seek_data": false, 00:34:03.218 "copy": false, 00:34:03.218 "nvme_iov_md": false 00:34:03.218 }, 00:34:03.218 "memory_domains": [ 00:34:03.218 { 00:34:03.218 "dma_device_id": "system", 00:34:03.218 "dma_device_type": 1 00:34:03.218 }, 00:34:03.218 { 00:34:03.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:03.218 "dma_device_type": 2 00:34:03.218 }, 00:34:03.218 { 00:34:03.218 "dma_device_id": "system", 00:34:03.218 "dma_device_type": 1 00:34:03.218 }, 00:34:03.218 { 00:34:03.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:03.218 "dma_device_type": 2 00:34:03.218 }, 00:34:03.218 { 00:34:03.218 "dma_device_id": "system", 00:34:03.218 "dma_device_type": 1 00:34:03.218 }, 00:34:03.218 { 00:34:03.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:03.218 "dma_device_type": 2 00:34:03.218 } 00:34:03.218 ], 00:34:03.218 "driver_specific": { 00:34:03.218 "raid": { 00:34:03.218 "uuid": "867cb242-32b7-43a0-b4d4-c30cd90588c4", 00:34:03.218 "strip_size_kb": 64, 00:34:03.218 "state": "online", 00:34:03.218 "raid_level": "raid0", 00:34:03.218 "superblock": true, 00:34:03.218 "num_base_bdevs": 3, 00:34:03.218 "num_base_bdevs_discovered": 3, 00:34:03.218 "num_base_bdevs_operational": 3, 00:34:03.218 "base_bdevs_list": [ 00:34:03.218 { 00:34:03.218 "name": "pt1", 00:34:03.218 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:03.218 "is_configured": true, 00:34:03.218 "data_offset": 2048, 00:34:03.218 "data_size": 63488 00:34:03.218 }, 00:34:03.218 { 00:34:03.218 "name": "pt2", 00:34:03.218 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:03.218 "is_configured": true, 00:34:03.218 "data_offset": 2048, 00:34:03.218 "data_size": 63488 00:34:03.218 }, 00:34:03.218 
{ 00:34:03.218 "name": "pt3", 00:34:03.218 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:03.218 "is_configured": true, 00:34:03.218 "data_offset": 2048, 00:34:03.218 "data_size": 63488 00:34:03.218 } 00:34:03.218 ] 00:34:03.218 } 00:34:03.218 } 00:34:03.218 }' 00:34:03.218 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:03.218 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:34:03.218 pt2 00:34:03.218 pt3' 00:34:03.218 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:03.218 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:34:03.218 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:03.218 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:03.218 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:34:03.218 17:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.218 17:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:03.218 17:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.218 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:03.218 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:03.218 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:03.218 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:34:03.218 17:31:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:03.218 17:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.218 17:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:03.218 17:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.476 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:03.476 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:03.476 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:03.476 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:34:03.476 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:03.476 17:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.476 17:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:03.476 17:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.476 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:03.476 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:03.476 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:03.476 17:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.476 17:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:03.476 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:34:03.476 
[2024-11-26 17:31:03.974706] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:03.476 17:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.476 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 867cb242-32b7-43a0-b4d4-c30cd90588c4 '!=' 867cb242-32b7-43a0-b4d4-c30cd90588c4 ']' 00:34:03.476 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:34:03.476 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:34:03.476 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:34:03.476 17:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65306 00:34:03.476 17:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65306 ']' 00:34:03.476 17:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65306 00:34:03.476 17:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:34:03.476 17:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:03.476 17:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65306 00:34:03.477 17:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:03.477 17:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:03.477 17:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65306' 00:34:03.477 killing process with pid 65306 00:34:03.477 17:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65306 00:34:03.477 [2024-11-26 17:31:04.030938] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:03.477 17:31:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@978 -- # wait 65306 00:34:03.477 [2024-11-26 17:31:04.031144] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:03.477 [2024-11-26 17:31:04.031217] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:03.477 [2024-11-26 17:31:04.031231] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:34:03.735 [2024-11-26 17:31:04.390160] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:05.122 17:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:34:05.122 00:34:05.122 real 0m5.468s 00:34:05.122 user 0m7.712s 00:34:05.122 sys 0m0.867s 00:34:05.122 17:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:05.122 ************************************ 00:34:05.122 END TEST raid_superblock_test 00:34:05.122 ************************************ 00:34:05.122 17:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:05.122 17:31:05 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:34:05.122 17:31:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:34:05.122 17:31:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:05.122 17:31:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:05.122 ************************************ 00:34:05.122 START TEST raid_read_error_test 00:34:05.122 ************************************ 00:34:05.122 17:31:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:34:05.122 17:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:34:05.122 17:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:34:05.122 17:31:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:34:05.122 17:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:34:05.122 17:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:34:05.122 17:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:34:05.122 17:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:34:05.122 17:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:34:05.122 17:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:34:05.122 17:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:34:05.122 17:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:34:05.122 17:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:34:05.122 17:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:34:05.122 17:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:34:05.122 17:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:34:05.122 17:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:34:05.122 17:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:34:05.122 17:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:34:05.122 17:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:34:05.122 17:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:34:05.122 17:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:34:05.122 17:31:05 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:34:05.122 17:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:34:05.122 17:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:34:05.122 17:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:34:05.122 17:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.gnbNVkZiLy 00:34:05.122 17:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65559 00:34:05.122 17:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:34:05.122 17:31:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65559 00:34:05.122 17:31:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65559 ']' 00:34:05.122 17:31:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:05.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:05.122 17:31:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:05.122 17:31:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:05.122 17:31:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:05.122 17:31:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:05.381 [2024-11-26 17:31:05.873855] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:34:05.381 [2024-11-26 17:31:05.873972] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65559 ] 00:34:05.381 [2024-11-26 17:31:06.049878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:05.640 [2024-11-26 17:31:06.171279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:05.898 [2024-11-26 17:31:06.383821] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:05.898 [2024-11-26 17:31:06.383900] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:06.155 17:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:06.156 17:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:34:06.156 17:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:34:06.156 17:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:34:06.156 17:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.156 17:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:06.156 BaseBdev1_malloc 00:34:06.156 17:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.156 17:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:34:06.156 17:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.156 17:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:06.156 true 00:34:06.156 17:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:34:06.156 17:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:34:06.156 17:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.156 17:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:06.156 [2024-11-26 17:31:06.785680] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:34:06.156 [2024-11-26 17:31:06.785749] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:06.156 [2024-11-26 17:31:06.785773] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:34:06.156 [2024-11-26 17:31:06.785785] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:06.156 [2024-11-26 17:31:06.788213] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:06.156 [2024-11-26 17:31:06.788262] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:34:06.156 BaseBdev1 00:34:06.156 17:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.156 17:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:34:06.156 17:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:34:06.156 17:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.156 17:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:06.156 BaseBdev2_malloc 00:34:06.156 17:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.156 17:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:34:06.156 17:31:06 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.156 17:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:06.414 true 00:34:06.414 17:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.414 17:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:34:06.414 17:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.414 17:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:06.414 [2024-11-26 17:31:06.856044] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:34:06.414 [2024-11-26 17:31:06.856104] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:06.414 [2024-11-26 17:31:06.856123] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:34:06.414 [2024-11-26 17:31:06.856133] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:06.414 [2024-11-26 17:31:06.858254] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:06.414 [2024-11-26 17:31:06.858295] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:34:06.414 BaseBdev2 00:34:06.414 17:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.414 17:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:34:06.414 17:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:34:06.414 17:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.414 17:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:06.414 BaseBdev3_malloc 00:34:06.414 17:31:06 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.414 17:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:34:06.414 17:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.414 17:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:06.414 true 00:34:06.414 17:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.414 17:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:34:06.414 17:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.414 17:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:06.414 [2024-11-26 17:31:06.934464] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:34:06.414 [2024-11-26 17:31:06.934603] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:06.414 [2024-11-26 17:31:06.934630] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:34:06.414 [2024-11-26 17:31:06.934643] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:06.414 [2024-11-26 17:31:06.936951] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:06.415 [2024-11-26 17:31:06.936995] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:34:06.415 BaseBdev3 00:34:06.415 17:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.415 17:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:34:06.415 17:31:06 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.415 17:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:06.415 [2024-11-26 17:31:06.946516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:06.415 [2024-11-26 17:31:06.948366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:06.415 [2024-11-26 17:31:06.948446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:06.415 [2024-11-26 17:31:06.948677] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:34:06.415 [2024-11-26 17:31:06.948694] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:34:06.415 [2024-11-26 17:31:06.948960] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:34:06.415 [2024-11-26 17:31:06.949128] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:34:06.415 [2024-11-26 17:31:06.949141] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:34:06.415 [2024-11-26 17:31:06.949311] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:06.415 17:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.415 17:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:34:06.415 17:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:06.415 17:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:06.415 17:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:34:06.415 17:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:06.415 17:31:06 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:06.415 17:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:06.415 17:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:06.415 17:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:06.415 17:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:06.415 17:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:06.415 17:31:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:06.415 17:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.415 17:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:06.415 17:31:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.415 17:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:06.415 "name": "raid_bdev1", 00:34:06.415 "uuid": "a794845b-1135-431d-84bd-ee149d1d6f08", 00:34:06.415 "strip_size_kb": 64, 00:34:06.415 "state": "online", 00:34:06.415 "raid_level": "raid0", 00:34:06.415 "superblock": true, 00:34:06.415 "num_base_bdevs": 3, 00:34:06.415 "num_base_bdevs_discovered": 3, 00:34:06.415 "num_base_bdevs_operational": 3, 00:34:06.415 "base_bdevs_list": [ 00:34:06.415 { 00:34:06.415 "name": "BaseBdev1", 00:34:06.415 "uuid": "591c925c-d394-55a0-a14e-618c1943a680", 00:34:06.415 "is_configured": true, 00:34:06.415 "data_offset": 2048, 00:34:06.415 "data_size": 63488 00:34:06.415 }, 00:34:06.415 { 00:34:06.415 "name": "BaseBdev2", 00:34:06.415 "uuid": "264d4030-2394-598a-919f-4e84ad15de2c", 00:34:06.415 "is_configured": true, 00:34:06.415 "data_offset": 2048, 00:34:06.415 "data_size": 63488 
00:34:06.415 }, 00:34:06.415 { 00:34:06.415 "name": "BaseBdev3", 00:34:06.415 "uuid": "b56f7596-7979-5180-b8d9-8298aa8becdc", 00:34:06.415 "is_configured": true, 00:34:06.415 "data_offset": 2048, 00:34:06.415 "data_size": 63488 00:34:06.415 } 00:34:06.415 ] 00:34:06.415 }' 00:34:06.415 17:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:06.415 17:31:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:06.980 17:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:34:06.980 17:31:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:34:06.980 [2024-11-26 17:31:07.546860] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:34:07.914 17:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:34:07.914 17:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.914 17:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:07.914 17:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.914 17:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:34:07.914 17:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:34:07.914 17:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:34:07.914 17:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:34:07.914 17:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:07.914 17:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:34:07.914 17:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:34:07.914 17:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:07.914 17:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:07.914 17:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:07.914 17:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:07.914 17:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:07.914 17:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:07.914 17:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:07.914 17:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:07.914 17:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.914 17:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:07.914 17:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.914 17:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:07.914 "name": "raid_bdev1", 00:34:07.914 "uuid": "a794845b-1135-431d-84bd-ee149d1d6f08", 00:34:07.914 "strip_size_kb": 64, 00:34:07.914 "state": "online", 00:34:07.914 "raid_level": "raid0", 00:34:07.914 "superblock": true, 00:34:07.914 "num_base_bdevs": 3, 00:34:07.914 "num_base_bdevs_discovered": 3, 00:34:07.914 "num_base_bdevs_operational": 3, 00:34:07.914 "base_bdevs_list": [ 00:34:07.914 { 00:34:07.914 "name": "BaseBdev1", 00:34:07.914 "uuid": "591c925c-d394-55a0-a14e-618c1943a680", 00:34:07.914 "is_configured": true, 00:34:07.914 "data_offset": 2048, 00:34:07.914 "data_size": 63488 
00:34:07.914 }, 00:34:07.914 { 00:34:07.914 "name": "BaseBdev2", 00:34:07.914 "uuid": "264d4030-2394-598a-919f-4e84ad15de2c", 00:34:07.914 "is_configured": true, 00:34:07.914 "data_offset": 2048, 00:34:07.914 "data_size": 63488 00:34:07.914 }, 00:34:07.914 { 00:34:07.914 "name": "BaseBdev3", 00:34:07.914 "uuid": "b56f7596-7979-5180-b8d9-8298aa8becdc", 00:34:07.914 "is_configured": true, 00:34:07.914 "data_offset": 2048, 00:34:07.914 "data_size": 63488 00:34:07.914 } 00:34:07.914 ] 00:34:07.914 }' 00:34:07.914 17:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:07.914 17:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:08.171 17:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:34:08.171 17:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.171 17:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:08.171 [2024-11-26 17:31:08.811187] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:08.171 [2024-11-26 17:31:08.811219] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:08.171 [2024-11-26 17:31:08.814459] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:08.171 [2024-11-26 17:31:08.814527] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:08.171 [2024-11-26 17:31:08.814573] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:08.171 [2024-11-26 17:31:08.814585] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:34:08.171 { 00:34:08.171 "results": [ 00:34:08.171 { 00:34:08.171 "job": "raid_bdev1", 00:34:08.171 "core_mask": "0x1", 00:34:08.171 "workload": "randrw", 00:34:08.171 "percentage": 50, 
00:34:08.171 "status": "finished", 00:34:08.171 "queue_depth": 1, 00:34:08.171 "io_size": 131072, 00:34:08.171 "runtime": 1.264861, 00:34:08.171 "iops": 14308.2915830277, 00:34:08.171 "mibps": 1788.5364478784625, 00:34:08.171 "io_failed": 1, 00:34:08.171 "io_timeout": 0, 00:34:08.171 "avg_latency_us": 96.65122891539521, 00:34:08.171 "min_latency_us": 22.91703056768559, 00:34:08.171 "max_latency_us": 1781.4917030567685 00:34:08.171 } 00:34:08.171 ], 00:34:08.171 "core_count": 1 00:34:08.171 } 00:34:08.172 17:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.172 17:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65559 00:34:08.172 17:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65559 ']' 00:34:08.172 17:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65559 00:34:08.172 17:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:34:08.172 17:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:08.172 17:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65559 00:34:08.172 killing process with pid 65559 00:34:08.172 17:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:08.172 17:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:08.172 17:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65559' 00:34:08.172 17:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65559 00:34:08.172 [2024-11-26 17:31:08.848289] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:08.172 17:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65559 00:34:08.429 [2024-11-26 
17:31:09.116770] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:10.346 17:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:34:10.346 17:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.gnbNVkZiLy 00:34:10.346 17:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:34:10.346 17:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.79 00:34:10.346 17:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:34:10.346 17:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:34:10.346 17:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:34:10.346 ************************************ 00:34:10.346 END TEST raid_read_error_test 00:34:10.346 ************************************ 00:34:10.346 17:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.79 != \0\.\0\0 ]] 00:34:10.346 00:34:10.346 real 0m4.749s 00:34:10.346 user 0m5.585s 00:34:10.346 sys 0m0.540s 00:34:10.346 17:31:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:10.346 17:31:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:10.346 17:31:10 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:34:10.346 17:31:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:34:10.346 17:31:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:10.346 17:31:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:10.346 ************************************ 00:34:10.346 START TEST raid_write_error_test 00:34:10.346 ************************************ 00:34:10.346 17:31:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:34:10.346 17:31:10 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:34:10.346 17:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:34:10.346 17:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:34:10.346 17:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:34:10.346 17:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:34:10.346 17:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:34:10.346 17:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:34:10.346 17:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:34:10.346 17:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:34:10.346 17:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:34:10.346 17:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:34:10.346 17:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:34:10.346 17:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:34:10.346 17:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:34:10.346 17:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:34:10.346 17:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:34:10.346 17:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:34:10.346 17:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:34:10.346 17:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:34:10.346 17:31:10 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:34:10.346 17:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:34:10.346 17:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:34:10.346 17:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:34:10.346 17:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:34:10.346 17:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:34:10.346 17:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.orpRMTuIfC 00:34:10.346 17:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65705 00:34:10.346 17:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65705 00:34:10.346 17:31:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:34:10.346 17:31:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65705 ']' 00:34:10.346 17:31:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:10.346 17:31:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:10.346 17:31:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:10.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:10.346 17:31:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:10.346 17:31:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:10.346 [2024-11-26 17:31:10.690300] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:34:10.346 [2024-11-26 17:31:10.690529] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65705 ] 00:34:10.346 [2024-11-26 17:31:10.871158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:10.346 [2024-11-26 17:31:11.003113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:10.605 [2024-11-26 17:31:11.225507] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:10.605 [2024-11-26 17:31:11.225674] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:10.863 17:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:10.863 17:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:34:10.863 17:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:34:10.863 17:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:34:10.863 17:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.863 17:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:11.123 BaseBdev1_malloc 00:34:11.123 17:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.123 17:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:34:11.123 17:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.123 17:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:11.123 true 00:34:11.123 17:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.123 17:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:34:11.123 17:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.123 17:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:11.123 [2024-11-26 17:31:11.619929] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:34:11.123 [2024-11-26 17:31:11.619997] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:11.123 [2024-11-26 17:31:11.620023] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:34:11.123 [2024-11-26 17:31:11.620036] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:11.123 [2024-11-26 17:31:11.622642] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:11.123 [2024-11-26 17:31:11.622698] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:34:11.123 BaseBdev1 00:34:11.123 17:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.123 17:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:34:11.123 17:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:34:11.123 17:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.123 17:31:11 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:34:11.123 BaseBdev2_malloc 00:34:11.123 17:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.123 17:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:34:11.123 17:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.123 17:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:11.123 true 00:34:11.123 17:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.123 17:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:34:11.123 17:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.123 17:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:11.123 [2024-11-26 17:31:11.685464] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:34:11.123 [2024-11-26 17:31:11.685644] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:11.123 [2024-11-26 17:31:11.685674] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:34:11.123 [2024-11-26 17:31:11.685689] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:11.123 [2024-11-26 17:31:11.688220] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:11.123 [2024-11-26 17:31:11.688262] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:34:11.123 BaseBdev2 00:34:11.123 17:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.123 17:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:34:11.123 17:31:11 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:34:11.123 17:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.123 17:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:11.123 BaseBdev3_malloc 00:34:11.123 17:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.123 17:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:34:11.123 17:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.123 17:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:11.123 true 00:34:11.123 17:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.123 17:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:34:11.123 17:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.123 17:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:11.123 [2024-11-26 17:31:11.767727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:34:11.123 [2024-11-26 17:31:11.767799] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:11.123 [2024-11-26 17:31:11.767821] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:34:11.123 [2024-11-26 17:31:11.767834] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:11.123 [2024-11-26 17:31:11.770273] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:11.123 [2024-11-26 17:31:11.770321] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:34:11.123 BaseBdev3 00:34:11.123 17:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.123 17:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:34:11.123 17:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.123 17:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:11.123 [2024-11-26 17:31:11.775843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:11.123 [2024-11-26 17:31:11.777930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:11.123 [2024-11-26 17:31:11.778016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:11.123 [2024-11-26 17:31:11.778250] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:34:11.123 [2024-11-26 17:31:11.778279] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:34:11.123 [2024-11-26 17:31:11.778656] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:34:11.123 [2024-11-26 17:31:11.778861] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:34:11.123 [2024-11-26 17:31:11.778878] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:34:11.123 [2024-11-26 17:31:11.779088] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:11.123 17:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.123 17:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:34:11.123 17:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:34:11.123 17:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:11.123 17:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:34:11.123 17:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:11.123 17:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:11.123 17:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:11.123 17:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:11.124 17:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:11.124 17:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:11.124 17:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:11.124 17:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:11.124 17:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.124 17:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:11.124 17:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.124 17:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:11.124 "name": "raid_bdev1", 00:34:11.124 "uuid": "f445647d-034c-4190-9ae0-e96cd21922fd", 00:34:11.124 "strip_size_kb": 64, 00:34:11.124 "state": "online", 00:34:11.124 "raid_level": "raid0", 00:34:11.124 "superblock": true, 00:34:11.124 "num_base_bdevs": 3, 00:34:11.124 "num_base_bdevs_discovered": 3, 00:34:11.124 "num_base_bdevs_operational": 3, 00:34:11.124 "base_bdevs_list": [ 00:34:11.124 { 00:34:11.124 "name": "BaseBdev1", 
00:34:11.124 "uuid": "df4feca3-2471-59a2-9830-8fdc78e6c26d", 00:34:11.124 "is_configured": true, 00:34:11.124 "data_offset": 2048, 00:34:11.124 "data_size": 63488 00:34:11.124 }, 00:34:11.124 { 00:34:11.124 "name": "BaseBdev2", 00:34:11.124 "uuid": "4206628e-8241-5754-8603-49d2c1489fac", 00:34:11.124 "is_configured": true, 00:34:11.124 "data_offset": 2048, 00:34:11.124 "data_size": 63488 00:34:11.124 }, 00:34:11.124 { 00:34:11.124 "name": "BaseBdev3", 00:34:11.124 "uuid": "7200de67-ddb4-5b04-a2f9-7300c25c2570", 00:34:11.124 "is_configured": true, 00:34:11.124 "data_offset": 2048, 00:34:11.124 "data_size": 63488 00:34:11.124 } 00:34:11.124 ] 00:34:11.124 }' 00:34:11.124 17:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:11.124 17:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:11.690 17:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:34:11.690 17:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:34:11.690 [2024-11-26 17:31:12.216621] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:34:12.625 17:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:34:12.625 17:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.625 17:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:12.625 17:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.625 17:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:34:12.625 17:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:34:12.625 17:31:13 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:34:12.625 17:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:34:12.625 17:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:12.625 17:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:12.625 17:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:34:12.625 17:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:12.625 17:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:12.625 17:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:12.625 17:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:12.625 17:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:12.625 17:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:12.625 17:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:12.625 17:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.625 17:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:12.625 17:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:12.625 17:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.625 17:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:12.625 "name": "raid_bdev1", 00:34:12.625 "uuid": "f445647d-034c-4190-9ae0-e96cd21922fd", 00:34:12.625 "strip_size_kb": 64, 00:34:12.625 "state": "online", 00:34:12.625 
"raid_level": "raid0", 00:34:12.625 "superblock": true, 00:34:12.625 "num_base_bdevs": 3, 00:34:12.625 "num_base_bdevs_discovered": 3, 00:34:12.625 "num_base_bdevs_operational": 3, 00:34:12.625 "base_bdevs_list": [ 00:34:12.625 { 00:34:12.625 "name": "BaseBdev1", 00:34:12.625 "uuid": "df4feca3-2471-59a2-9830-8fdc78e6c26d", 00:34:12.625 "is_configured": true, 00:34:12.625 "data_offset": 2048, 00:34:12.625 "data_size": 63488 00:34:12.625 }, 00:34:12.625 { 00:34:12.625 "name": "BaseBdev2", 00:34:12.625 "uuid": "4206628e-8241-5754-8603-49d2c1489fac", 00:34:12.625 "is_configured": true, 00:34:12.625 "data_offset": 2048, 00:34:12.625 "data_size": 63488 00:34:12.625 }, 00:34:12.625 { 00:34:12.625 "name": "BaseBdev3", 00:34:12.625 "uuid": "7200de67-ddb4-5b04-a2f9-7300c25c2570", 00:34:12.625 "is_configured": true, 00:34:12.625 "data_offset": 2048, 00:34:12.625 "data_size": 63488 00:34:12.625 } 00:34:12.625 ] 00:34:12.625 }' 00:34:12.625 17:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:12.625 17:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:12.885 17:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:34:12.885 17:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.885 17:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:12.885 [2024-11-26 17:31:13.549526] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:12.885 [2024-11-26 17:31:13.549619] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:12.885 [2024-11-26 17:31:13.552892] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:12.885 [2024-11-26 17:31:13.552984] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:12.885 [2024-11-26 17:31:13.553048] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:12.885 [2024-11-26 17:31:13.553094] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:34:12.885 { 00:34:12.885 "results": [ 00:34:12.885 { 00:34:12.885 "job": "raid_bdev1", 00:34:12.885 "core_mask": "0x1", 00:34:12.885 "workload": "randrw", 00:34:12.885 "percentage": 50, 00:34:12.885 "status": "finished", 00:34:12.885 "queue_depth": 1, 00:34:12.885 "io_size": 131072, 00:34:12.885 "runtime": 1.333518, 00:34:12.885 "iops": 13484.63237841559, 00:34:12.885 "mibps": 1685.5790473019488, 00:34:12.885 "io_failed": 1, 00:34:12.885 "io_timeout": 0, 00:34:12.885 "avg_latency_us": 102.47600540733886, 00:34:12.885 "min_latency_us": 28.28296943231441, 00:34:12.885 "max_latency_us": 1752.8733624454148 00:34:12.885 } 00:34:12.885 ], 00:34:12.885 "core_count": 1 00:34:12.885 } 00:34:12.885 17:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.885 17:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65705 00:34:12.885 17:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65705 ']' 00:34:12.885 17:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65705 00:34:12.885 17:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:34:12.885 17:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:12.885 17:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65705 00:34:13.143 17:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:13.143 17:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:13.143 17:31:13 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 65705' 00:34:13.143 killing process with pid 65705 00:34:13.143 17:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65705 00:34:13.143 [2024-11-26 17:31:13.589732] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:13.143 17:31:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65705 00:34:13.402 [2024-11-26 17:31:13.865056] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:14.775 17:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.orpRMTuIfC 00:34:14.775 17:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:34:14.775 17:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:34:14.775 17:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:34:14.775 17:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:34:14.775 17:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:34:14.775 17:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:34:14.775 ************************************ 00:34:14.775 END TEST raid_write_error_test 00:34:14.775 ************************************ 00:34:14.775 17:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:34:14.775 00:34:14.775 real 0m4.626s 00:34:14.775 user 0m5.373s 00:34:14.775 sys 0m0.517s 00:34:14.775 17:31:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:14.775 17:31:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:14.775 17:31:15 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:34:14.775 17:31:15 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:34:14.775 17:31:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:34:14.775 17:31:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:14.775 17:31:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:14.775 ************************************ 00:34:14.775 START TEST raid_state_function_test 00:34:14.775 ************************************ 00:34:14.775 17:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:34:14.775 17:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:34:14.775 17:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:34:14.775 17:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:34:14.775 17:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:34:14.775 17:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:34:14.775 17:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:14.775 17:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:34:14.775 17:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:34:14.775 17:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:14.775 17:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:34:14.775 17:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:34:14.775 17:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:14.775 17:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:34:14.775 17:31:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:34:14.775 17:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:14.775 17:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:34:14.775 17:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:34:14.775 17:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:34:14.775 17:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:34:14.775 17:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:34:14.775 17:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:34:14.775 17:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:34:14.775 17:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:34:14.775 17:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:34:14.775 17:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:34:14.775 17:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:34:14.775 17:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65848 00:34:14.775 17:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:34:14.775 Process raid pid: 65848 00:34:14.775 17:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65848' 00:34:14.775 17:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65848 00:34:14.775 17:31:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65848 ']' 00:34:14.775 17:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:14.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:14.775 17:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:14.775 17:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:14.775 17:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:14.775 17:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:14.775 [2024-11-26 17:31:15.363048] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:34:14.775 [2024-11-26 17:31:15.363255] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:15.034 [2024-11-26 17:31:15.534656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:15.034 [2024-11-26 17:31:15.668152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:15.292 [2024-11-26 17:31:15.912499] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:15.292 [2024-11-26 17:31:15.912558] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:15.860 17:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:15.860 17:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:34:15.860 17:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:34:15.860 17:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.860 17:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:15.860 [2024-11-26 17:31:16.256833] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:15.860 [2024-11-26 17:31:16.256958] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:15.860 [2024-11-26 17:31:16.256976] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:15.860 [2024-11-26 17:31:16.256988] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:15.860 [2024-11-26 17:31:16.256996] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:15.860 [2024-11-26 17:31:16.257007] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:15.860 17:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.860 17:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:34:15.860 17:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:15.860 17:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:15.861 17:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:34:15.861 17:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:15.861 17:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:15.861 17:31:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:15.861 17:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:15.861 17:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:15.861 17:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:15.861 17:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:15.861 17:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:15.861 17:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.861 17:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:15.861 17:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.861 17:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:15.861 "name": "Existed_Raid", 00:34:15.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:15.861 "strip_size_kb": 64, 00:34:15.861 "state": "configuring", 00:34:15.861 "raid_level": "concat", 00:34:15.861 "superblock": false, 00:34:15.861 "num_base_bdevs": 3, 00:34:15.861 "num_base_bdevs_discovered": 0, 00:34:15.861 "num_base_bdevs_operational": 3, 00:34:15.861 "base_bdevs_list": [ 00:34:15.861 { 00:34:15.861 "name": "BaseBdev1", 00:34:15.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:15.861 "is_configured": false, 00:34:15.861 "data_offset": 0, 00:34:15.861 "data_size": 0 00:34:15.861 }, 00:34:15.861 { 00:34:15.861 "name": "BaseBdev2", 00:34:15.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:15.861 "is_configured": false, 00:34:15.861 "data_offset": 0, 00:34:15.861 "data_size": 0 00:34:15.861 }, 00:34:15.861 { 00:34:15.861 "name": "BaseBdev3", 00:34:15.861 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:34:15.861 "is_configured": false, 00:34:15.861 "data_offset": 0, 00:34:15.861 "data_size": 0 00:34:15.861 } 00:34:15.861 ] 00:34:15.861 }' 00:34:15.861 17:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:15.861 17:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:16.120 17:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:34:16.120 17:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.120 17:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:16.120 [2024-11-26 17:31:16.672110] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:16.120 [2024-11-26 17:31:16.672156] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:34:16.121 17:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.121 17:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:34:16.121 17:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.121 17:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:16.121 [2024-11-26 17:31:16.684096] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:16.121 [2024-11-26 17:31:16.684201] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:16.121 [2024-11-26 17:31:16.684254] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:16.121 [2024-11-26 17:31:16.684282] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:34:16.121 [2024-11-26 17:31:16.684306] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:16.121 [2024-11-26 17:31:16.684389] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:16.121 17:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.121 17:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:34:16.121 17:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.121 17:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:16.121 [2024-11-26 17:31:16.732034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:16.121 BaseBdev1 00:34:16.121 17:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.121 17:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:34:16.121 17:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:34:16.121 17:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:16.121 17:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:34:16.121 17:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:16.121 17:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:16.121 17:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:16.121 17:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.121 17:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:34:16.121 17:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.121 17:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:34:16.121 17:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.121 17:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:16.121 [ 00:34:16.121 { 00:34:16.121 "name": "BaseBdev1", 00:34:16.121 "aliases": [ 00:34:16.121 "813aa0bc-187f-40cb-aaf4-12bd3fdec7ff" 00:34:16.121 ], 00:34:16.121 "product_name": "Malloc disk", 00:34:16.121 "block_size": 512, 00:34:16.121 "num_blocks": 65536, 00:34:16.121 "uuid": "813aa0bc-187f-40cb-aaf4-12bd3fdec7ff", 00:34:16.121 "assigned_rate_limits": { 00:34:16.121 "rw_ios_per_sec": 0, 00:34:16.121 "rw_mbytes_per_sec": 0, 00:34:16.121 "r_mbytes_per_sec": 0, 00:34:16.121 "w_mbytes_per_sec": 0 00:34:16.121 }, 00:34:16.121 "claimed": true, 00:34:16.121 "claim_type": "exclusive_write", 00:34:16.121 "zoned": false, 00:34:16.121 "supported_io_types": { 00:34:16.121 "read": true, 00:34:16.121 "write": true, 00:34:16.121 "unmap": true, 00:34:16.121 "flush": true, 00:34:16.121 "reset": true, 00:34:16.121 "nvme_admin": false, 00:34:16.121 "nvme_io": false, 00:34:16.121 "nvme_io_md": false, 00:34:16.121 "write_zeroes": true, 00:34:16.121 "zcopy": true, 00:34:16.121 "get_zone_info": false, 00:34:16.121 "zone_management": false, 00:34:16.121 "zone_append": false, 00:34:16.121 "compare": false, 00:34:16.121 "compare_and_write": false, 00:34:16.121 "abort": true, 00:34:16.121 "seek_hole": false, 00:34:16.121 "seek_data": false, 00:34:16.121 "copy": true, 00:34:16.121 "nvme_iov_md": false 00:34:16.121 }, 00:34:16.121 "memory_domains": [ 00:34:16.121 { 00:34:16.121 "dma_device_id": "system", 00:34:16.121 "dma_device_type": 1 00:34:16.121 }, 00:34:16.121 { 00:34:16.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:34:16.121 "dma_device_type": 2 00:34:16.121 } 00:34:16.121 ], 00:34:16.121 "driver_specific": {} 00:34:16.121 } 00:34:16.121 ] 00:34:16.121 17:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.121 17:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:34:16.121 17:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:34:16.121 17:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:16.121 17:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:16.121 17:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:34:16.121 17:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:16.121 17:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:16.121 17:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:16.121 17:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:16.121 17:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:16.121 17:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:16.121 17:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:16.121 17:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:16.121 17:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.121 17:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:16.121 17:31:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.121 17:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:16.121 "name": "Existed_Raid", 00:34:16.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:16.121 "strip_size_kb": 64, 00:34:16.121 "state": "configuring", 00:34:16.121 "raid_level": "concat", 00:34:16.121 "superblock": false, 00:34:16.121 "num_base_bdevs": 3, 00:34:16.121 "num_base_bdevs_discovered": 1, 00:34:16.121 "num_base_bdevs_operational": 3, 00:34:16.121 "base_bdevs_list": [ 00:34:16.121 { 00:34:16.121 "name": "BaseBdev1", 00:34:16.121 "uuid": "813aa0bc-187f-40cb-aaf4-12bd3fdec7ff", 00:34:16.121 "is_configured": true, 00:34:16.121 "data_offset": 0, 00:34:16.121 "data_size": 65536 00:34:16.121 }, 00:34:16.121 { 00:34:16.121 "name": "BaseBdev2", 00:34:16.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:16.121 "is_configured": false, 00:34:16.121 "data_offset": 0, 00:34:16.121 "data_size": 0 00:34:16.121 }, 00:34:16.121 { 00:34:16.121 "name": "BaseBdev3", 00:34:16.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:16.121 "is_configured": false, 00:34:16.121 "data_offset": 0, 00:34:16.121 "data_size": 0 00:34:16.121 } 00:34:16.121 ] 00:34:16.121 }' 00:34:16.121 17:31:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:16.121 17:31:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:16.702 17:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:34:16.702 17:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.702 17:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:16.702 [2024-11-26 17:31:17.155530] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:16.702 [2024-11-26 17:31:17.155711] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:34:16.702 17:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.702 17:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:34:16.702 17:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.702 17:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:16.702 [2024-11-26 17:31:17.167609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:16.702 [2024-11-26 17:31:17.170184] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:16.702 [2024-11-26 17:31:17.170262] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:16.702 [2024-11-26 17:31:17.170278] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:16.702 [2024-11-26 17:31:17.170293] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:16.702 17:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.702 17:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:34:16.702 17:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:34:16.702 17:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:34:16.702 17:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:16.702 17:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:16.702 17:31:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:34:16.702 17:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:16.702 17:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:16.702 17:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:16.702 17:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:16.702 17:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:16.702 17:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:16.702 17:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:16.702 17:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.702 17:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:16.702 17:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:16.702 17:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.702 17:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:16.702 "name": "Existed_Raid", 00:34:16.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:16.702 "strip_size_kb": 64, 00:34:16.702 "state": "configuring", 00:34:16.702 "raid_level": "concat", 00:34:16.702 "superblock": false, 00:34:16.702 "num_base_bdevs": 3, 00:34:16.702 "num_base_bdevs_discovered": 1, 00:34:16.702 "num_base_bdevs_operational": 3, 00:34:16.702 "base_bdevs_list": [ 00:34:16.702 { 00:34:16.702 "name": "BaseBdev1", 00:34:16.702 "uuid": "813aa0bc-187f-40cb-aaf4-12bd3fdec7ff", 00:34:16.702 "is_configured": true, 00:34:16.702 "data_offset": 
0, 00:34:16.702 "data_size": 65536 00:34:16.702 }, 00:34:16.702 { 00:34:16.702 "name": "BaseBdev2", 00:34:16.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:16.702 "is_configured": false, 00:34:16.702 "data_offset": 0, 00:34:16.702 "data_size": 0 00:34:16.702 }, 00:34:16.702 { 00:34:16.702 "name": "BaseBdev3", 00:34:16.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:16.702 "is_configured": false, 00:34:16.702 "data_offset": 0, 00:34:16.702 "data_size": 0 00:34:16.702 } 00:34:16.702 ] 00:34:16.702 }' 00:34:16.702 17:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:16.702 17:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:16.977 17:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:34:16.977 17:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.977 17:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:16.977 [2024-11-26 17:31:17.655775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:16.977 BaseBdev2 00:34:16.977 17:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.977 17:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:34:16.977 17:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:34:16.977 17:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:16.977 17:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:34:16.977 17:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:16.977 17:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:34:16.977 17:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:16.977 17:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.977 17:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:16.977 17:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.977 17:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:34:16.977 17:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.977 17:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:17.235 [ 00:34:17.235 { 00:34:17.235 "name": "BaseBdev2", 00:34:17.235 "aliases": [ 00:34:17.235 "d0a059cf-5b88-415b-b7ec-3bb2f25847da" 00:34:17.235 ], 00:34:17.235 "product_name": "Malloc disk", 00:34:17.235 "block_size": 512, 00:34:17.235 "num_blocks": 65536, 00:34:17.235 "uuid": "d0a059cf-5b88-415b-b7ec-3bb2f25847da", 00:34:17.235 "assigned_rate_limits": { 00:34:17.235 "rw_ios_per_sec": 0, 00:34:17.235 "rw_mbytes_per_sec": 0, 00:34:17.235 "r_mbytes_per_sec": 0, 00:34:17.235 "w_mbytes_per_sec": 0 00:34:17.235 }, 00:34:17.235 "claimed": true, 00:34:17.235 "claim_type": "exclusive_write", 00:34:17.235 "zoned": false, 00:34:17.235 "supported_io_types": { 00:34:17.235 "read": true, 00:34:17.235 "write": true, 00:34:17.235 "unmap": true, 00:34:17.235 "flush": true, 00:34:17.235 "reset": true, 00:34:17.235 "nvme_admin": false, 00:34:17.235 "nvme_io": false, 00:34:17.235 "nvme_io_md": false, 00:34:17.235 "write_zeroes": true, 00:34:17.235 "zcopy": true, 00:34:17.235 "get_zone_info": false, 00:34:17.235 "zone_management": false, 00:34:17.235 "zone_append": false, 00:34:17.235 "compare": false, 00:34:17.235 "compare_and_write": false, 00:34:17.235 "abort": true, 00:34:17.235 "seek_hole": 
false, 00:34:17.235 "seek_data": false, 00:34:17.235 "copy": true, 00:34:17.235 "nvme_iov_md": false 00:34:17.235 }, 00:34:17.235 "memory_domains": [ 00:34:17.235 { 00:34:17.235 "dma_device_id": "system", 00:34:17.235 "dma_device_type": 1 00:34:17.235 }, 00:34:17.235 { 00:34:17.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:17.235 "dma_device_type": 2 00:34:17.235 } 00:34:17.235 ], 00:34:17.235 "driver_specific": {} 00:34:17.235 } 00:34:17.235 ] 00:34:17.235 17:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.235 17:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:34:17.235 17:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:34:17.235 17:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:34:17.235 17:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:34:17.235 17:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:17.235 17:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:17.235 17:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:34:17.235 17:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:17.235 17:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:17.235 17:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:17.235 17:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:17.235 17:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:17.235 17:31:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:34:17.235 17:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:17.235 17:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:17.235 17:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.235 17:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:17.235 17:31:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.235 17:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:17.235 "name": "Existed_Raid", 00:34:17.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:17.235 "strip_size_kb": 64, 00:34:17.235 "state": "configuring", 00:34:17.235 "raid_level": "concat", 00:34:17.235 "superblock": false, 00:34:17.235 "num_base_bdevs": 3, 00:34:17.235 "num_base_bdevs_discovered": 2, 00:34:17.235 "num_base_bdevs_operational": 3, 00:34:17.235 "base_bdevs_list": [ 00:34:17.235 { 00:34:17.235 "name": "BaseBdev1", 00:34:17.235 "uuid": "813aa0bc-187f-40cb-aaf4-12bd3fdec7ff", 00:34:17.235 "is_configured": true, 00:34:17.235 "data_offset": 0, 00:34:17.235 "data_size": 65536 00:34:17.235 }, 00:34:17.235 { 00:34:17.236 "name": "BaseBdev2", 00:34:17.236 "uuid": "d0a059cf-5b88-415b-b7ec-3bb2f25847da", 00:34:17.236 "is_configured": true, 00:34:17.236 "data_offset": 0, 00:34:17.236 "data_size": 65536 00:34:17.236 }, 00:34:17.236 { 00:34:17.236 "name": "BaseBdev3", 00:34:17.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:17.236 "is_configured": false, 00:34:17.236 "data_offset": 0, 00:34:17.236 "data_size": 0 00:34:17.236 } 00:34:17.236 ] 00:34:17.236 }' 00:34:17.236 17:31:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:17.236 17:31:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:34:17.494 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:34:17.494 17:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.494 17:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:17.494 [2024-11-26 17:31:18.123898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:17.494 [2024-11-26 17:31:18.124049] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:34:17.494 [2024-11-26 17:31:18.124087] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:34:17.494 [2024-11-26 17:31:18.124441] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:34:17.494 [2024-11-26 17:31:18.124699] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:34:17.494 [2024-11-26 17:31:18.124753] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:34:17.494 [2024-11-26 17:31:18.125102] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:17.494 BaseBdev3 00:34:17.494 17:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.494 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:34:17.494 17:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:34:17.494 17:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:17.494 17:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:34:17.494 17:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:17.494 17:31:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:17.494 17:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:17.494 17:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.494 17:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:17.494 17:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.494 17:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:34:17.494 17:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.494 17:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:17.494 [ 00:34:17.494 { 00:34:17.494 "name": "BaseBdev3", 00:34:17.494 "aliases": [ 00:34:17.494 "c717c675-f1f7-46c2-ab40-ebf3beba3ce0" 00:34:17.494 ], 00:34:17.494 "product_name": "Malloc disk", 00:34:17.494 "block_size": 512, 00:34:17.494 "num_blocks": 65536, 00:34:17.494 "uuid": "c717c675-f1f7-46c2-ab40-ebf3beba3ce0", 00:34:17.494 "assigned_rate_limits": { 00:34:17.494 "rw_ios_per_sec": 0, 00:34:17.494 "rw_mbytes_per_sec": 0, 00:34:17.494 "r_mbytes_per_sec": 0, 00:34:17.494 "w_mbytes_per_sec": 0 00:34:17.494 }, 00:34:17.494 "claimed": true, 00:34:17.494 "claim_type": "exclusive_write", 00:34:17.494 "zoned": false, 00:34:17.494 "supported_io_types": { 00:34:17.494 "read": true, 00:34:17.495 "write": true, 00:34:17.495 "unmap": true, 00:34:17.495 "flush": true, 00:34:17.495 "reset": true, 00:34:17.495 "nvme_admin": false, 00:34:17.495 "nvme_io": false, 00:34:17.495 "nvme_io_md": false, 00:34:17.495 "write_zeroes": true, 00:34:17.495 "zcopy": true, 00:34:17.495 "get_zone_info": false, 00:34:17.495 "zone_management": false, 00:34:17.495 "zone_append": false, 00:34:17.495 "compare": false, 
00:34:17.495 "compare_and_write": false, 00:34:17.495 "abort": true, 00:34:17.495 "seek_hole": false, 00:34:17.495 "seek_data": false, 00:34:17.495 "copy": true, 00:34:17.495 "nvme_iov_md": false 00:34:17.495 }, 00:34:17.495 "memory_domains": [ 00:34:17.495 { 00:34:17.495 "dma_device_id": "system", 00:34:17.495 "dma_device_type": 1 00:34:17.495 }, 00:34:17.495 { 00:34:17.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:17.495 "dma_device_type": 2 00:34:17.495 } 00:34:17.495 ], 00:34:17.495 "driver_specific": {} 00:34:17.495 } 00:34:17.495 ] 00:34:17.495 17:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.495 17:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:34:17.495 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:34:17.495 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:34:17.495 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:34:17.495 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:17.495 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:17.495 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:34:17.495 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:17.495 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:17.495 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:17.495 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:17.495 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:34:17.495 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:17.495 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:17.495 17:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.495 17:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:17.495 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:17.495 17:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.753 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:17.753 "name": "Existed_Raid", 00:34:17.753 "uuid": "abb12e32-5203-4f33-866f-915eb5253df4", 00:34:17.753 "strip_size_kb": 64, 00:34:17.753 "state": "online", 00:34:17.753 "raid_level": "concat", 00:34:17.753 "superblock": false, 00:34:17.753 "num_base_bdevs": 3, 00:34:17.753 "num_base_bdevs_discovered": 3, 00:34:17.753 "num_base_bdevs_operational": 3, 00:34:17.753 "base_bdevs_list": [ 00:34:17.753 { 00:34:17.753 "name": "BaseBdev1", 00:34:17.753 "uuid": "813aa0bc-187f-40cb-aaf4-12bd3fdec7ff", 00:34:17.753 "is_configured": true, 00:34:17.753 "data_offset": 0, 00:34:17.753 "data_size": 65536 00:34:17.753 }, 00:34:17.753 { 00:34:17.753 "name": "BaseBdev2", 00:34:17.753 "uuid": "d0a059cf-5b88-415b-b7ec-3bb2f25847da", 00:34:17.753 "is_configured": true, 00:34:17.753 "data_offset": 0, 00:34:17.753 "data_size": 65536 00:34:17.753 }, 00:34:17.753 { 00:34:17.753 "name": "BaseBdev3", 00:34:17.753 "uuid": "c717c675-f1f7-46c2-ab40-ebf3beba3ce0", 00:34:17.753 "is_configured": true, 00:34:17.753 "data_offset": 0, 00:34:17.753 "data_size": 65536 00:34:17.753 } 00:34:17.753 ] 00:34:17.753 }' 00:34:17.753 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:34:17.753 17:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:18.013 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:34:18.013 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:34:18.013 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:34:18.013 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:34:18.013 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:34:18.013 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:34:18.013 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:34:18.013 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:34:18.013 17:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.013 17:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:18.013 [2024-11-26 17:31:18.615476] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:18.013 17:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.013 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:18.013 "name": "Existed_Raid", 00:34:18.013 "aliases": [ 00:34:18.013 "abb12e32-5203-4f33-866f-915eb5253df4" 00:34:18.013 ], 00:34:18.013 "product_name": "Raid Volume", 00:34:18.013 "block_size": 512, 00:34:18.013 "num_blocks": 196608, 00:34:18.013 "uuid": "abb12e32-5203-4f33-866f-915eb5253df4", 00:34:18.013 "assigned_rate_limits": { 00:34:18.013 "rw_ios_per_sec": 0, 00:34:18.013 "rw_mbytes_per_sec": 0, 00:34:18.013 "r_mbytes_per_sec": 
0, 00:34:18.013 "w_mbytes_per_sec": 0 00:34:18.013 }, 00:34:18.013 "claimed": false, 00:34:18.013 "zoned": false, 00:34:18.013 "supported_io_types": { 00:34:18.013 "read": true, 00:34:18.013 "write": true, 00:34:18.013 "unmap": true, 00:34:18.013 "flush": true, 00:34:18.013 "reset": true, 00:34:18.013 "nvme_admin": false, 00:34:18.013 "nvme_io": false, 00:34:18.013 "nvme_io_md": false, 00:34:18.013 "write_zeroes": true, 00:34:18.013 "zcopy": false, 00:34:18.013 "get_zone_info": false, 00:34:18.013 "zone_management": false, 00:34:18.013 "zone_append": false, 00:34:18.013 "compare": false, 00:34:18.013 "compare_and_write": false, 00:34:18.013 "abort": false, 00:34:18.013 "seek_hole": false, 00:34:18.013 "seek_data": false, 00:34:18.013 "copy": false, 00:34:18.013 "nvme_iov_md": false 00:34:18.013 }, 00:34:18.013 "memory_domains": [ 00:34:18.013 { 00:34:18.013 "dma_device_id": "system", 00:34:18.013 "dma_device_type": 1 00:34:18.013 }, 00:34:18.013 { 00:34:18.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:18.013 "dma_device_type": 2 00:34:18.013 }, 00:34:18.013 { 00:34:18.013 "dma_device_id": "system", 00:34:18.013 "dma_device_type": 1 00:34:18.013 }, 00:34:18.013 { 00:34:18.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:18.013 "dma_device_type": 2 00:34:18.013 }, 00:34:18.013 { 00:34:18.013 "dma_device_id": "system", 00:34:18.013 "dma_device_type": 1 00:34:18.013 }, 00:34:18.013 { 00:34:18.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:18.013 "dma_device_type": 2 00:34:18.013 } 00:34:18.013 ], 00:34:18.013 "driver_specific": { 00:34:18.013 "raid": { 00:34:18.013 "uuid": "abb12e32-5203-4f33-866f-915eb5253df4", 00:34:18.013 "strip_size_kb": 64, 00:34:18.013 "state": "online", 00:34:18.013 "raid_level": "concat", 00:34:18.013 "superblock": false, 00:34:18.013 "num_base_bdevs": 3, 00:34:18.013 "num_base_bdevs_discovered": 3, 00:34:18.013 "num_base_bdevs_operational": 3, 00:34:18.013 "base_bdevs_list": [ 00:34:18.013 { 00:34:18.013 "name": "BaseBdev1", 
00:34:18.013 "uuid": "813aa0bc-187f-40cb-aaf4-12bd3fdec7ff", 00:34:18.013 "is_configured": true, 00:34:18.013 "data_offset": 0, 00:34:18.013 "data_size": 65536 00:34:18.013 }, 00:34:18.013 { 00:34:18.013 "name": "BaseBdev2", 00:34:18.013 "uuid": "d0a059cf-5b88-415b-b7ec-3bb2f25847da", 00:34:18.013 "is_configured": true, 00:34:18.013 "data_offset": 0, 00:34:18.013 "data_size": 65536 00:34:18.013 }, 00:34:18.013 { 00:34:18.013 "name": "BaseBdev3", 00:34:18.013 "uuid": "c717c675-f1f7-46c2-ab40-ebf3beba3ce0", 00:34:18.013 "is_configured": true, 00:34:18.013 "data_offset": 0, 00:34:18.013 "data_size": 65536 00:34:18.013 } 00:34:18.014 ] 00:34:18.014 } 00:34:18.014 } 00:34:18.014 }' 00:34:18.014 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:18.014 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:34:18.014 BaseBdev2 00:34:18.014 BaseBdev3' 00:34:18.014 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:18.272 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:34:18.272 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:18.272 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:34:18.272 17:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.272 17:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:18.272 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:18.272 17:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:34:18.272 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:18.272 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:18.272 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:18.272 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:34:18.272 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:18.272 17:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.272 17:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:18.272 17:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.272 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:18.272 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:18.272 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:18.272 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:18.272 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:34:18.272 17:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.272 17:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:18.272 17:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.272 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:34:18.272 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:18.272 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:34:18.272 17:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.272 17:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:18.272 [2024-11-26 17:31:18.886759] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:18.272 [2024-11-26 17:31:18.886793] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:18.272 [2024-11-26 17:31:18.886853] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:18.530 17:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.530 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:34:18.530 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:34:18.531 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:34:18.531 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:34:18.531 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:34:18.531 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:34:18.531 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:18.531 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:34:18.531 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:34:18.531 17:31:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:18.531 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:18.531 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:18.531 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:18.531 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:18.531 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:18.531 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:18.531 17:31:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:18.531 17:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.531 17:31:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:18.531 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.531 17:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:18.531 "name": "Existed_Raid", 00:34:18.531 "uuid": "abb12e32-5203-4f33-866f-915eb5253df4", 00:34:18.531 "strip_size_kb": 64, 00:34:18.531 "state": "offline", 00:34:18.531 "raid_level": "concat", 00:34:18.531 "superblock": false, 00:34:18.531 "num_base_bdevs": 3, 00:34:18.531 "num_base_bdevs_discovered": 2, 00:34:18.531 "num_base_bdevs_operational": 2, 00:34:18.531 "base_bdevs_list": [ 00:34:18.531 { 00:34:18.531 "name": null, 00:34:18.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:18.531 "is_configured": false, 00:34:18.531 "data_offset": 0, 00:34:18.531 "data_size": 65536 00:34:18.531 }, 00:34:18.531 { 00:34:18.531 "name": "BaseBdev2", 00:34:18.531 "uuid": 
"d0a059cf-5b88-415b-b7ec-3bb2f25847da", 00:34:18.531 "is_configured": true, 00:34:18.531 "data_offset": 0, 00:34:18.531 "data_size": 65536 00:34:18.531 }, 00:34:18.531 { 00:34:18.531 "name": "BaseBdev3", 00:34:18.531 "uuid": "c717c675-f1f7-46c2-ab40-ebf3beba3ce0", 00:34:18.531 "is_configured": true, 00:34:18.531 "data_offset": 0, 00:34:18.531 "data_size": 65536 00:34:18.531 } 00:34:18.531 ] 00:34:18.531 }' 00:34:18.531 17:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:18.531 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:18.793 17:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:34:18.793 17:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:34:18.793 17:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:18.793 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.793 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:18.793 17:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:34:18.793 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.793 17:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:34:18.793 17:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:18.793 17:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:34:18.793 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.793 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:18.793 [2024-11-26 17:31:19.456411] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:34:19.050 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.050 17:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:34:19.050 17:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:34:19.050 17:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:19.050 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.050 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.050 17:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:34:19.050 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.050 17:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:34:19.050 17:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:19.050 17:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:34:19.050 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.050 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.050 [2024-11-26 17:31:19.625763] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:34:19.050 [2024-11-26 17:31:19.625822] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:34:19.050 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.050 17:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:34:19.050 17:31:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:34:19.309 17:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:34:19.309 17:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:19.309 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.309 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.309 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.309 17:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:34:19.309 17:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:34:19.309 17:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:34:19.309 17:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:34:19.309 17:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:34:19.309 17:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:34:19.309 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.309 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.309 BaseBdev2 00:34:19.309 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.309 17:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:34:19.309 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:34:19.309 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:19.309 
17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:34:19.309 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:19.309 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:19.309 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:19.309 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.309 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.309 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.309 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:34:19.309 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.309 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.309 [ 00:34:19.309 { 00:34:19.309 "name": "BaseBdev2", 00:34:19.309 "aliases": [ 00:34:19.309 "b1b11888-6d8c-4a2c-9c72-c6c14e08e4d2" 00:34:19.309 ], 00:34:19.309 "product_name": "Malloc disk", 00:34:19.309 "block_size": 512, 00:34:19.309 "num_blocks": 65536, 00:34:19.309 "uuid": "b1b11888-6d8c-4a2c-9c72-c6c14e08e4d2", 00:34:19.309 "assigned_rate_limits": { 00:34:19.309 "rw_ios_per_sec": 0, 00:34:19.309 "rw_mbytes_per_sec": 0, 00:34:19.309 "r_mbytes_per_sec": 0, 00:34:19.309 "w_mbytes_per_sec": 0 00:34:19.309 }, 00:34:19.309 "claimed": false, 00:34:19.309 "zoned": false, 00:34:19.309 "supported_io_types": { 00:34:19.309 "read": true, 00:34:19.309 "write": true, 00:34:19.309 "unmap": true, 00:34:19.309 "flush": true, 00:34:19.309 "reset": true, 00:34:19.309 "nvme_admin": false, 00:34:19.309 "nvme_io": false, 00:34:19.309 "nvme_io_md": false, 00:34:19.309 "write_zeroes": true, 
00:34:19.309 "zcopy": true, 00:34:19.309 "get_zone_info": false, 00:34:19.309 "zone_management": false, 00:34:19.309 "zone_append": false, 00:34:19.309 "compare": false, 00:34:19.309 "compare_and_write": false, 00:34:19.309 "abort": true, 00:34:19.309 "seek_hole": false, 00:34:19.309 "seek_data": false, 00:34:19.309 "copy": true, 00:34:19.309 "nvme_iov_md": false 00:34:19.309 }, 00:34:19.309 "memory_domains": [ 00:34:19.309 { 00:34:19.309 "dma_device_id": "system", 00:34:19.309 "dma_device_type": 1 00:34:19.309 }, 00:34:19.309 { 00:34:19.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:19.309 "dma_device_type": 2 00:34:19.309 } 00:34:19.309 ], 00:34:19.309 "driver_specific": {} 00:34:19.309 } 00:34:19.309 ] 00:34:19.309 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.309 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:34:19.309 17:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:34:19.309 17:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:34:19.309 17:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:34:19.309 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.309 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.309 BaseBdev3 00:34:19.309 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.309 17:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:34:19.309 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:34:19.309 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:19.309 17:31:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:34:19.309 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:19.309 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:19.309 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:19.309 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.309 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.309 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.309 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:34:19.309 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.310 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.310 [ 00:34:19.310 { 00:34:19.310 "name": "BaseBdev3", 00:34:19.310 "aliases": [ 00:34:19.310 "17bc1368-daac-48be-89a4-3c6bbf7c8841" 00:34:19.310 ], 00:34:19.310 "product_name": "Malloc disk", 00:34:19.310 "block_size": 512, 00:34:19.310 "num_blocks": 65536, 00:34:19.310 "uuid": "17bc1368-daac-48be-89a4-3c6bbf7c8841", 00:34:19.310 "assigned_rate_limits": { 00:34:19.310 "rw_ios_per_sec": 0, 00:34:19.310 "rw_mbytes_per_sec": 0, 00:34:19.310 "r_mbytes_per_sec": 0, 00:34:19.310 "w_mbytes_per_sec": 0 00:34:19.310 }, 00:34:19.310 "claimed": false, 00:34:19.310 "zoned": false, 00:34:19.310 "supported_io_types": { 00:34:19.310 "read": true, 00:34:19.310 "write": true, 00:34:19.310 "unmap": true, 00:34:19.310 "flush": true, 00:34:19.310 "reset": true, 00:34:19.310 "nvme_admin": false, 00:34:19.310 "nvme_io": false, 00:34:19.310 "nvme_io_md": false, 00:34:19.310 "write_zeroes": true, 
00:34:19.310 "zcopy": true, 00:34:19.310 "get_zone_info": false, 00:34:19.310 "zone_management": false, 00:34:19.310 "zone_append": false, 00:34:19.310 "compare": false, 00:34:19.310 "compare_and_write": false, 00:34:19.310 "abort": true, 00:34:19.310 "seek_hole": false, 00:34:19.310 "seek_data": false, 00:34:19.310 "copy": true, 00:34:19.310 "nvme_iov_md": false 00:34:19.310 }, 00:34:19.310 "memory_domains": [ 00:34:19.310 { 00:34:19.310 "dma_device_id": "system", 00:34:19.310 "dma_device_type": 1 00:34:19.310 }, 00:34:19.310 { 00:34:19.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:19.310 "dma_device_type": 2 00:34:19.310 } 00:34:19.310 ], 00:34:19.310 "driver_specific": {} 00:34:19.310 } 00:34:19.310 ] 00:34:19.310 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.310 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:34:19.310 17:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:34:19.310 17:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:34:19.310 17:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:34:19.310 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.310 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.310 [2024-11-26 17:31:19.954198] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:19.310 [2024-11-26 17:31:19.954318] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:19.310 [2024-11-26 17:31:19.954382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:19.310 [2024-11-26 17:31:19.956563] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:19.310 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.310 17:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:34:19.310 17:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:19.310 17:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:19.310 17:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:34:19.310 17:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:19.310 17:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:19.310 17:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:19.310 17:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:19.310 17:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:19.310 17:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:19.310 17:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:19.310 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.310 17:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:19.310 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.310 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.310 17:31:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:19.310 "name": "Existed_Raid", 00:34:19.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:19.310 "strip_size_kb": 64, 00:34:19.310 "state": "configuring", 00:34:19.310 "raid_level": "concat", 00:34:19.310 "superblock": false, 00:34:19.310 "num_base_bdevs": 3, 00:34:19.310 "num_base_bdevs_discovered": 2, 00:34:19.310 "num_base_bdevs_operational": 3, 00:34:19.310 "base_bdevs_list": [ 00:34:19.310 { 00:34:19.310 "name": "BaseBdev1", 00:34:19.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:19.310 "is_configured": false, 00:34:19.310 "data_offset": 0, 00:34:19.310 "data_size": 0 00:34:19.310 }, 00:34:19.310 { 00:34:19.310 "name": "BaseBdev2", 00:34:19.310 "uuid": "b1b11888-6d8c-4a2c-9c72-c6c14e08e4d2", 00:34:19.310 "is_configured": true, 00:34:19.310 "data_offset": 0, 00:34:19.310 "data_size": 65536 00:34:19.310 }, 00:34:19.310 { 00:34:19.310 "name": "BaseBdev3", 00:34:19.310 "uuid": "17bc1368-daac-48be-89a4-3c6bbf7c8841", 00:34:19.310 "is_configured": true, 00:34:19.310 "data_offset": 0, 00:34:19.310 "data_size": 65536 00:34:19.310 } 00:34:19.310 ] 00:34:19.310 }' 00:34:19.310 17:31:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:19.310 17:31:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.876 17:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:34:19.876 17:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.876 17:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.876 [2024-11-26 17:31:20.329593] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:34:19.876 17:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.876 17:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:34:19.876 17:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:19.876 17:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:19.876 17:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:34:19.876 17:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:19.876 17:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:19.876 17:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:19.876 17:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:19.876 17:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:19.876 17:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:19.876 17:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:19.876 17:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.876 17:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.876 17:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:19.876 17:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.876 17:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:19.876 "name": "Existed_Raid", 00:34:19.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:19.876 "strip_size_kb": 64, 00:34:19.876 "state": "configuring", 00:34:19.876 "raid_level": "concat", 00:34:19.876 "superblock": false, 
00:34:19.876 "num_base_bdevs": 3, 00:34:19.876 "num_base_bdevs_discovered": 1, 00:34:19.876 "num_base_bdevs_operational": 3, 00:34:19.876 "base_bdevs_list": [ 00:34:19.876 { 00:34:19.876 "name": "BaseBdev1", 00:34:19.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:19.876 "is_configured": false, 00:34:19.876 "data_offset": 0, 00:34:19.876 "data_size": 0 00:34:19.876 }, 00:34:19.876 { 00:34:19.876 "name": null, 00:34:19.876 "uuid": "b1b11888-6d8c-4a2c-9c72-c6c14e08e4d2", 00:34:19.876 "is_configured": false, 00:34:19.876 "data_offset": 0, 00:34:19.876 "data_size": 65536 00:34:19.876 }, 00:34:19.876 { 00:34:19.876 "name": "BaseBdev3", 00:34:19.876 "uuid": "17bc1368-daac-48be-89a4-3c6bbf7c8841", 00:34:19.876 "is_configured": true, 00:34:19.876 "data_offset": 0, 00:34:19.876 "data_size": 65536 00:34:19.876 } 00:34:19.876 ] 00:34:19.876 }' 00:34:19.876 17:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:19.876 17:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:20.133 17:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:34:20.133 17:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:20.133 17:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.133 17:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:20.133 17:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.133 17:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:34:20.133 17:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:34:20.133 17:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.133 
17:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:20.392 [2024-11-26 17:31:20.833225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:20.392 BaseBdev1 00:34:20.392 17:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.392 17:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:34:20.392 17:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:34:20.392 17:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:20.392 17:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:34:20.392 17:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:20.392 17:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:20.392 17:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:20.392 17:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.392 17:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:20.392 17:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.392 17:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:34:20.392 17:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.392 17:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:20.392 [ 00:34:20.392 { 00:34:20.392 "name": "BaseBdev1", 00:34:20.392 "aliases": [ 00:34:20.392 "9d5fbfe3-466d-4efa-81b9-0cc43c44d65e" 00:34:20.392 ], 00:34:20.392 "product_name": 
"Malloc disk", 00:34:20.392 "block_size": 512, 00:34:20.392 "num_blocks": 65536, 00:34:20.392 "uuid": "9d5fbfe3-466d-4efa-81b9-0cc43c44d65e", 00:34:20.392 "assigned_rate_limits": { 00:34:20.392 "rw_ios_per_sec": 0, 00:34:20.392 "rw_mbytes_per_sec": 0, 00:34:20.392 "r_mbytes_per_sec": 0, 00:34:20.392 "w_mbytes_per_sec": 0 00:34:20.392 }, 00:34:20.392 "claimed": true, 00:34:20.392 "claim_type": "exclusive_write", 00:34:20.392 "zoned": false, 00:34:20.392 "supported_io_types": { 00:34:20.392 "read": true, 00:34:20.392 "write": true, 00:34:20.392 "unmap": true, 00:34:20.392 "flush": true, 00:34:20.392 "reset": true, 00:34:20.392 "nvme_admin": false, 00:34:20.392 "nvme_io": false, 00:34:20.392 "nvme_io_md": false, 00:34:20.392 "write_zeroes": true, 00:34:20.392 "zcopy": true, 00:34:20.392 "get_zone_info": false, 00:34:20.392 "zone_management": false, 00:34:20.392 "zone_append": false, 00:34:20.392 "compare": false, 00:34:20.392 "compare_and_write": false, 00:34:20.392 "abort": true, 00:34:20.392 "seek_hole": false, 00:34:20.392 "seek_data": false, 00:34:20.392 "copy": true, 00:34:20.392 "nvme_iov_md": false 00:34:20.392 }, 00:34:20.392 "memory_domains": [ 00:34:20.392 { 00:34:20.392 "dma_device_id": "system", 00:34:20.392 "dma_device_type": 1 00:34:20.392 }, 00:34:20.392 { 00:34:20.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:20.392 "dma_device_type": 2 00:34:20.392 } 00:34:20.392 ], 00:34:20.392 "driver_specific": {} 00:34:20.392 } 00:34:20.392 ] 00:34:20.392 17:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.392 17:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:34:20.392 17:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:34:20.392 17:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:20.392 17:31:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:20.392 17:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:34:20.392 17:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:20.392 17:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:20.392 17:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:20.392 17:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:20.392 17:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:20.392 17:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:20.392 17:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:20.392 17:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.392 17:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:20.392 17:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:20.392 17:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.392 17:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:20.392 "name": "Existed_Raid", 00:34:20.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:20.392 "strip_size_kb": 64, 00:34:20.392 "state": "configuring", 00:34:20.392 "raid_level": "concat", 00:34:20.392 "superblock": false, 00:34:20.392 "num_base_bdevs": 3, 00:34:20.392 "num_base_bdevs_discovered": 2, 00:34:20.392 "num_base_bdevs_operational": 3, 00:34:20.392 "base_bdevs_list": [ 00:34:20.392 { 00:34:20.392 "name": "BaseBdev1", 
00:34:20.392 "uuid": "9d5fbfe3-466d-4efa-81b9-0cc43c44d65e", 00:34:20.392 "is_configured": true, 00:34:20.392 "data_offset": 0, 00:34:20.392 "data_size": 65536 00:34:20.392 }, 00:34:20.392 { 00:34:20.392 "name": null, 00:34:20.392 "uuid": "b1b11888-6d8c-4a2c-9c72-c6c14e08e4d2", 00:34:20.392 "is_configured": false, 00:34:20.392 "data_offset": 0, 00:34:20.392 "data_size": 65536 00:34:20.392 }, 00:34:20.392 { 00:34:20.392 "name": "BaseBdev3", 00:34:20.392 "uuid": "17bc1368-daac-48be-89a4-3c6bbf7c8841", 00:34:20.392 "is_configured": true, 00:34:20.392 "data_offset": 0, 00:34:20.392 "data_size": 65536 00:34:20.392 } 00:34:20.392 ] 00:34:20.392 }' 00:34:20.392 17:31:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:20.392 17:31:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:20.651 17:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:34:20.651 17:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:20.651 17:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.651 17:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:20.651 17:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.651 17:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:34:20.651 17:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:34:20.651 17:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.651 17:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:20.651 [2024-11-26 17:31:21.264575] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:34:20.651 
17:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.651 17:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:34:20.651 17:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:20.651 17:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:20.651 17:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:34:20.651 17:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:20.651 17:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:20.651 17:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:20.651 17:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:20.651 17:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:20.651 17:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:20.651 17:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:20.651 17:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:20.651 17:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.651 17:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:20.651 17:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.651 17:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:20.651 "name": "Existed_Raid", 00:34:20.651 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:34:20.651 "strip_size_kb": 64, 00:34:20.651 "state": "configuring", 00:34:20.652 "raid_level": "concat", 00:34:20.652 "superblock": false, 00:34:20.652 "num_base_bdevs": 3, 00:34:20.652 "num_base_bdevs_discovered": 1, 00:34:20.652 "num_base_bdevs_operational": 3, 00:34:20.652 "base_bdevs_list": [ 00:34:20.652 { 00:34:20.652 "name": "BaseBdev1", 00:34:20.652 "uuid": "9d5fbfe3-466d-4efa-81b9-0cc43c44d65e", 00:34:20.652 "is_configured": true, 00:34:20.652 "data_offset": 0, 00:34:20.652 "data_size": 65536 00:34:20.652 }, 00:34:20.652 { 00:34:20.652 "name": null, 00:34:20.652 "uuid": "b1b11888-6d8c-4a2c-9c72-c6c14e08e4d2", 00:34:20.652 "is_configured": false, 00:34:20.652 "data_offset": 0, 00:34:20.652 "data_size": 65536 00:34:20.652 }, 00:34:20.652 { 00:34:20.652 "name": null, 00:34:20.652 "uuid": "17bc1368-daac-48be-89a4-3c6bbf7c8841", 00:34:20.652 "is_configured": false, 00:34:20.652 "data_offset": 0, 00:34:20.652 "data_size": 65536 00:34:20.652 } 00:34:20.652 ] 00:34:20.652 }' 00:34:20.652 17:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:20.652 17:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:21.220 17:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:34:21.220 17:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:21.220 17:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.220 17:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:21.220 17:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.220 17:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:34:21.220 17:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:34:21.220 17:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.220 17:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:21.220 [2024-11-26 17:31:21.735831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:21.220 17:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.220 17:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:34:21.220 17:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:21.220 17:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:21.220 17:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:34:21.220 17:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:21.220 17:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:21.220 17:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:21.220 17:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:21.220 17:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:21.220 17:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:21.220 17:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:21.220 17:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:21.220 17:31:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.220 17:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:21.220 17:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.220 17:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:21.220 "name": "Existed_Raid", 00:34:21.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:21.220 "strip_size_kb": 64, 00:34:21.220 "state": "configuring", 00:34:21.220 "raid_level": "concat", 00:34:21.220 "superblock": false, 00:34:21.220 "num_base_bdevs": 3, 00:34:21.220 "num_base_bdevs_discovered": 2, 00:34:21.220 "num_base_bdevs_operational": 3, 00:34:21.220 "base_bdevs_list": [ 00:34:21.220 { 00:34:21.220 "name": "BaseBdev1", 00:34:21.220 "uuid": "9d5fbfe3-466d-4efa-81b9-0cc43c44d65e", 00:34:21.220 "is_configured": true, 00:34:21.220 "data_offset": 0, 00:34:21.220 "data_size": 65536 00:34:21.220 }, 00:34:21.220 { 00:34:21.220 "name": null, 00:34:21.220 "uuid": "b1b11888-6d8c-4a2c-9c72-c6c14e08e4d2", 00:34:21.220 "is_configured": false, 00:34:21.220 "data_offset": 0, 00:34:21.220 "data_size": 65536 00:34:21.220 }, 00:34:21.220 { 00:34:21.220 "name": "BaseBdev3", 00:34:21.220 "uuid": "17bc1368-daac-48be-89a4-3c6bbf7c8841", 00:34:21.221 "is_configured": true, 00:34:21.221 "data_offset": 0, 00:34:21.221 "data_size": 65536 00:34:21.221 } 00:34:21.221 ] 00:34:21.221 }' 00:34:21.221 17:31:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:21.221 17:31:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:21.480 17:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:34:21.480 17:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:21.480 17:31:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.480 17:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:21.480 17:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.739 17:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:34:21.739 17:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:34:21.739 17:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.739 17:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:21.739 [2024-11-26 17:31:22.187157] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:21.739 17:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.739 17:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:34:21.739 17:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:21.739 17:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:21.739 17:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:34:21.739 17:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:21.739 17:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:21.739 17:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:21.739 17:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:21.739 17:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:21.739 
17:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:21.739 17:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:21.739 17:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.739 17:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:21.739 17:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:21.739 17:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.739 17:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:21.739 "name": "Existed_Raid", 00:34:21.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:21.739 "strip_size_kb": 64, 00:34:21.739 "state": "configuring", 00:34:21.739 "raid_level": "concat", 00:34:21.739 "superblock": false, 00:34:21.739 "num_base_bdevs": 3, 00:34:21.739 "num_base_bdevs_discovered": 1, 00:34:21.739 "num_base_bdevs_operational": 3, 00:34:21.739 "base_bdevs_list": [ 00:34:21.739 { 00:34:21.739 "name": null, 00:34:21.739 "uuid": "9d5fbfe3-466d-4efa-81b9-0cc43c44d65e", 00:34:21.739 "is_configured": false, 00:34:21.739 "data_offset": 0, 00:34:21.739 "data_size": 65536 00:34:21.739 }, 00:34:21.739 { 00:34:21.739 "name": null, 00:34:21.739 "uuid": "b1b11888-6d8c-4a2c-9c72-c6c14e08e4d2", 00:34:21.739 "is_configured": false, 00:34:21.739 "data_offset": 0, 00:34:21.739 "data_size": 65536 00:34:21.739 }, 00:34:21.739 { 00:34:21.739 "name": "BaseBdev3", 00:34:21.739 "uuid": "17bc1368-daac-48be-89a4-3c6bbf7c8841", 00:34:21.739 "is_configured": true, 00:34:21.739 "data_offset": 0, 00:34:21.739 "data_size": 65536 00:34:21.739 } 00:34:21.739 ] 00:34:21.739 }' 00:34:21.739 17:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:21.739 17:31:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:21.999 17:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:21.999 17:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.999 17:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:21.999 17:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:34:22.259 17:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.259 17:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:34:22.259 17:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:34:22.259 17:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.259 17:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:22.259 [2024-11-26 17:31:22.724659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:22.259 17:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.259 17:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:34:22.259 17:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:22.259 17:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:22.259 17:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:34:22.259 17:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:22.259 17:31:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:22.259 17:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:22.259 17:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:22.259 17:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:22.259 17:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:22.259 17:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:22.259 17:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.259 17:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:22.259 17:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:22.259 17:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.259 17:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:22.259 "name": "Existed_Raid", 00:34:22.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:22.259 "strip_size_kb": 64, 00:34:22.259 "state": "configuring", 00:34:22.259 "raid_level": "concat", 00:34:22.259 "superblock": false, 00:34:22.259 "num_base_bdevs": 3, 00:34:22.259 "num_base_bdevs_discovered": 2, 00:34:22.259 "num_base_bdevs_operational": 3, 00:34:22.259 "base_bdevs_list": [ 00:34:22.259 { 00:34:22.259 "name": null, 00:34:22.259 "uuid": "9d5fbfe3-466d-4efa-81b9-0cc43c44d65e", 00:34:22.259 "is_configured": false, 00:34:22.259 "data_offset": 0, 00:34:22.259 "data_size": 65536 00:34:22.259 }, 00:34:22.259 { 00:34:22.259 "name": "BaseBdev2", 00:34:22.259 "uuid": "b1b11888-6d8c-4a2c-9c72-c6c14e08e4d2", 00:34:22.259 "is_configured": true, 00:34:22.259 "data_offset": 
0, 00:34:22.259 "data_size": 65536 00:34:22.259 }, 00:34:22.259 { 00:34:22.259 "name": "BaseBdev3", 00:34:22.259 "uuid": "17bc1368-daac-48be-89a4-3c6bbf7c8841", 00:34:22.259 "is_configured": true, 00:34:22.259 "data_offset": 0, 00:34:22.259 "data_size": 65536 00:34:22.259 } 00:34:22.259 ] 00:34:22.259 }' 00:34:22.259 17:31:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:22.259 17:31:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:22.519 17:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:34:22.520 17:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:22.520 17:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.520 17:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:22.520 17:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.520 17:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:34:22.520 17:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:22.520 17:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.520 17:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:22.520 17:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:34:22.520 17:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.782 17:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9d5fbfe3-466d-4efa-81b9-0cc43c44d65e 00:34:22.782 17:31:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.782 17:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:22.783 [2024-11-26 17:31:23.266729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:34:22.783 [2024-11-26 17:31:23.266901] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:34:22.783 [2024-11-26 17:31:23.266936] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:34:22.783 [2024-11-26 17:31:23.267254] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:34:22.783 [2024-11-26 17:31:23.267470] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:34:22.783 [2024-11-26 17:31:23.267539] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:34:22.783 [2024-11-26 17:31:23.267882] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:22.783 NewBaseBdev 00:34:22.783 17:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.783 17:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:34:22.783 17:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:34:22.783 17:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:22.783 17:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:34:22.783 17:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:22.783 17:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:22.783 17:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:22.783 
17:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.783 17:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:22.783 17:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.783 17:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:34:22.783 17:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.783 17:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:22.783 [ 00:34:22.783 { 00:34:22.783 "name": "NewBaseBdev", 00:34:22.783 "aliases": [ 00:34:22.783 "9d5fbfe3-466d-4efa-81b9-0cc43c44d65e" 00:34:22.783 ], 00:34:22.783 "product_name": "Malloc disk", 00:34:22.783 "block_size": 512, 00:34:22.783 "num_blocks": 65536, 00:34:22.783 "uuid": "9d5fbfe3-466d-4efa-81b9-0cc43c44d65e", 00:34:22.783 "assigned_rate_limits": { 00:34:22.783 "rw_ios_per_sec": 0, 00:34:22.783 "rw_mbytes_per_sec": 0, 00:34:22.783 "r_mbytes_per_sec": 0, 00:34:22.783 "w_mbytes_per_sec": 0 00:34:22.783 }, 00:34:22.783 "claimed": true, 00:34:22.783 "claim_type": "exclusive_write", 00:34:22.783 "zoned": false, 00:34:22.783 "supported_io_types": { 00:34:22.783 "read": true, 00:34:22.783 "write": true, 00:34:22.783 "unmap": true, 00:34:22.783 "flush": true, 00:34:22.783 "reset": true, 00:34:22.783 "nvme_admin": false, 00:34:22.783 "nvme_io": false, 00:34:22.783 "nvme_io_md": false, 00:34:22.783 "write_zeroes": true, 00:34:22.783 "zcopy": true, 00:34:22.783 "get_zone_info": false, 00:34:22.783 "zone_management": false, 00:34:22.783 "zone_append": false, 00:34:22.783 "compare": false, 00:34:22.783 "compare_and_write": false, 00:34:22.783 "abort": true, 00:34:22.783 "seek_hole": false, 00:34:22.783 "seek_data": false, 00:34:22.783 "copy": true, 00:34:22.783 "nvme_iov_md": false 00:34:22.783 }, 00:34:22.783 
"memory_domains": [ 00:34:22.783 { 00:34:22.783 "dma_device_id": "system", 00:34:22.783 "dma_device_type": 1 00:34:22.783 }, 00:34:22.783 { 00:34:22.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:22.783 "dma_device_type": 2 00:34:22.783 } 00:34:22.783 ], 00:34:22.783 "driver_specific": {} 00:34:22.783 } 00:34:22.783 ] 00:34:22.783 17:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.783 17:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:34:22.783 17:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:34:22.783 17:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:22.783 17:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:22.783 17:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:34:22.783 17:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:22.783 17:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:22.783 17:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:22.783 17:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:22.783 17:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:22.783 17:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:22.783 17:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:22.783 17:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:22.783 17:31:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.783 17:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:22.783 17:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.783 17:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:22.783 "name": "Existed_Raid", 00:34:22.783 "uuid": "ee0d4f80-2e82-4855-96c3-583570d855bf", 00:34:22.783 "strip_size_kb": 64, 00:34:22.783 "state": "online", 00:34:22.783 "raid_level": "concat", 00:34:22.783 "superblock": false, 00:34:22.783 "num_base_bdevs": 3, 00:34:22.783 "num_base_bdevs_discovered": 3, 00:34:22.783 "num_base_bdevs_operational": 3, 00:34:22.783 "base_bdevs_list": [ 00:34:22.783 { 00:34:22.783 "name": "NewBaseBdev", 00:34:22.783 "uuid": "9d5fbfe3-466d-4efa-81b9-0cc43c44d65e", 00:34:22.783 "is_configured": true, 00:34:22.783 "data_offset": 0, 00:34:22.783 "data_size": 65536 00:34:22.783 }, 00:34:22.783 { 00:34:22.783 "name": "BaseBdev2", 00:34:22.783 "uuid": "b1b11888-6d8c-4a2c-9c72-c6c14e08e4d2", 00:34:22.783 "is_configured": true, 00:34:22.783 "data_offset": 0, 00:34:22.783 "data_size": 65536 00:34:22.783 }, 00:34:22.783 { 00:34:22.783 "name": "BaseBdev3", 00:34:22.783 "uuid": "17bc1368-daac-48be-89a4-3c6bbf7c8841", 00:34:22.783 "is_configured": true, 00:34:22.783 "data_offset": 0, 00:34:22.783 "data_size": 65536 00:34:22.783 } 00:34:22.783 ] 00:34:22.783 }' 00:34:22.783 17:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:22.783 17:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:23.353 17:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:34:23.353 17:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:34:23.353 17:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:34:23.353 17:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:34:23.353 17:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:34:23.353 17:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:34:23.353 17:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:34:23.353 17:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.353 17:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:23.353 17:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:34:23.353 [2024-11-26 17:31:23.762316] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:23.353 17:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.353 17:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:23.353 "name": "Existed_Raid", 00:34:23.353 "aliases": [ 00:34:23.353 "ee0d4f80-2e82-4855-96c3-583570d855bf" 00:34:23.353 ], 00:34:23.353 "product_name": "Raid Volume", 00:34:23.353 "block_size": 512, 00:34:23.353 "num_blocks": 196608, 00:34:23.353 "uuid": "ee0d4f80-2e82-4855-96c3-583570d855bf", 00:34:23.353 "assigned_rate_limits": { 00:34:23.353 "rw_ios_per_sec": 0, 00:34:23.353 "rw_mbytes_per_sec": 0, 00:34:23.353 "r_mbytes_per_sec": 0, 00:34:23.353 "w_mbytes_per_sec": 0 00:34:23.353 }, 00:34:23.353 "claimed": false, 00:34:23.353 "zoned": false, 00:34:23.353 "supported_io_types": { 00:34:23.353 "read": true, 00:34:23.353 "write": true, 00:34:23.353 "unmap": true, 00:34:23.353 "flush": true, 00:34:23.353 "reset": true, 00:34:23.353 "nvme_admin": false, 00:34:23.353 "nvme_io": false, 00:34:23.353 "nvme_io_md": false, 00:34:23.353 "write_zeroes": true, 
00:34:23.353 "zcopy": false, 00:34:23.353 "get_zone_info": false, 00:34:23.353 "zone_management": false, 00:34:23.353 "zone_append": false, 00:34:23.353 "compare": false, 00:34:23.353 "compare_and_write": false, 00:34:23.353 "abort": false, 00:34:23.353 "seek_hole": false, 00:34:23.353 "seek_data": false, 00:34:23.353 "copy": false, 00:34:23.353 "nvme_iov_md": false 00:34:23.353 }, 00:34:23.353 "memory_domains": [ 00:34:23.353 { 00:34:23.353 "dma_device_id": "system", 00:34:23.353 "dma_device_type": 1 00:34:23.353 }, 00:34:23.353 { 00:34:23.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:23.353 "dma_device_type": 2 00:34:23.353 }, 00:34:23.353 { 00:34:23.353 "dma_device_id": "system", 00:34:23.353 "dma_device_type": 1 00:34:23.353 }, 00:34:23.353 { 00:34:23.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:23.353 "dma_device_type": 2 00:34:23.353 }, 00:34:23.353 { 00:34:23.353 "dma_device_id": "system", 00:34:23.353 "dma_device_type": 1 00:34:23.353 }, 00:34:23.353 { 00:34:23.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:23.353 "dma_device_type": 2 00:34:23.353 } 00:34:23.353 ], 00:34:23.353 "driver_specific": { 00:34:23.353 "raid": { 00:34:23.353 "uuid": "ee0d4f80-2e82-4855-96c3-583570d855bf", 00:34:23.353 "strip_size_kb": 64, 00:34:23.353 "state": "online", 00:34:23.353 "raid_level": "concat", 00:34:23.353 "superblock": false, 00:34:23.353 "num_base_bdevs": 3, 00:34:23.353 "num_base_bdevs_discovered": 3, 00:34:23.353 "num_base_bdevs_operational": 3, 00:34:23.353 "base_bdevs_list": [ 00:34:23.353 { 00:34:23.353 "name": "NewBaseBdev", 00:34:23.353 "uuid": "9d5fbfe3-466d-4efa-81b9-0cc43c44d65e", 00:34:23.353 "is_configured": true, 00:34:23.353 "data_offset": 0, 00:34:23.353 "data_size": 65536 00:34:23.353 }, 00:34:23.353 { 00:34:23.353 "name": "BaseBdev2", 00:34:23.353 "uuid": "b1b11888-6d8c-4a2c-9c72-c6c14e08e4d2", 00:34:23.353 "is_configured": true, 00:34:23.353 "data_offset": 0, 00:34:23.353 "data_size": 65536 00:34:23.353 }, 00:34:23.353 { 
00:34:23.353 "name": "BaseBdev3", 00:34:23.353 "uuid": "17bc1368-daac-48be-89a4-3c6bbf7c8841", 00:34:23.353 "is_configured": true, 00:34:23.353 "data_offset": 0, 00:34:23.353 "data_size": 65536 00:34:23.353 } 00:34:23.353 ] 00:34:23.353 } 00:34:23.353 } 00:34:23.353 }' 00:34:23.353 17:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:23.353 17:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:34:23.353 BaseBdev2 00:34:23.353 BaseBdev3' 00:34:23.353 17:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:23.353 17:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:34:23.353 17:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:23.353 17:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:34:23.354 17:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.354 17:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:23.354 17:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:23.354 17:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.354 17:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:23.354 17:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:23.354 17:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:23.354 17:31:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:23.354 17:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:34:23.354 17:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.354 17:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:23.354 17:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.354 17:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:23.354 17:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:23.354 17:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:23.354 17:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:34:23.354 17:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.354 17:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:23.354 17:31:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:23.354 17:31:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.354 17:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:23.354 17:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:23.354 17:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:34:23.354 17:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.354 17:31:24 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:34:23.354 [2024-11-26 17:31:24.029616] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:23.354 [2024-11-26 17:31:24.029651] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:23.354 [2024-11-26 17:31:24.029755] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:23.354 [2024-11-26 17:31:24.029820] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:23.354 [2024-11-26 17:31:24.029835] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:34:23.354 17:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.354 17:31:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65848 00:34:23.354 17:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65848 ']' 00:34:23.354 17:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65848 00:34:23.354 17:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:34:23.354 17:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:23.614 17:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65848 00:34:23.614 killing process with pid 65848 00:34:23.614 17:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:23.614 17:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:23.614 17:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65848' 00:34:23.614 17:31:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@973 -- # kill 65848 00:34:23.614 [2024-11-26 17:31:24.076763] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:23.614 17:31:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65848 00:34:23.874 [2024-11-26 17:31:24.434748] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:25.255 17:31:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:34:25.255 00:34:25.255 real 0m10.513s 00:34:25.255 user 0m16.462s 00:34:25.255 sys 0m1.598s 00:34:25.255 ************************************ 00:34:25.255 END TEST raid_state_function_test 00:34:25.255 ************************************ 00:34:25.255 17:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:25.255 17:31:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:25.255 17:31:25 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:34:25.255 17:31:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:34:25.255 17:31:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:25.255 17:31:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:25.255 ************************************ 00:34:25.255 START TEST raid_state_function_test_sb 00:34:25.255 ************************************ 00:34:25.255 17:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:34:25.255 17:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:34:25.255 17:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:34:25.255 17:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:34:25.255 17:31:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:34:25.255 17:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:34:25.255 17:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:25.255 17:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:34:25.255 17:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:34:25.255 17:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:25.255 17:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:34:25.255 17:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:34:25.255 17:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:25.255 17:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:34:25.255 17:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:34:25.255 17:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:25.255 17:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:34:25.255 17:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:34:25.255 17:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:34:25.255 17:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:34:25.255 17:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:34:25.255 17:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:34:25.255 17:31:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:34:25.255 17:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:34:25.255 17:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:34:25.255 17:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:34:25.255 17:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:34:25.255 17:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66471 00:34:25.255 17:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:34:25.255 17:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66471' 00:34:25.255 Process raid pid: 66471 00:34:25.255 17:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66471 00:34:25.255 17:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66471 ']' 00:34:25.256 17:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:25.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:25.256 17:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:25.256 17:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:34:25.256 17:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:25.256 17:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:25.256 [2024-11-26 17:31:25.935133] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:34:25.256 [2024-11-26 17:31:25.935270] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:25.515 [2024-11-26 17:31:26.107026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:25.775 [2024-11-26 17:31:26.242855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:26.035 [2024-11-26 17:31:26.481143] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:26.035 [2024-11-26 17:31:26.481211] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:26.295 17:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:26.295 17:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:34:26.295 17:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:34:26.295 17:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.295 17:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:26.295 [2024-11-26 17:31:26.814746] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:26.295 [2024-11-26 17:31:26.814866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:26.295 [2024-11-26 
17:31:26.814883] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:26.295 [2024-11-26 17:31:26.814894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:26.295 [2024-11-26 17:31:26.814902] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:26.295 [2024-11-26 17:31:26.814911] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:26.295 17:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.295 17:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:34:26.295 17:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:26.295 17:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:26.295 17:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:34:26.295 17:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:26.295 17:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:26.295 17:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:26.295 17:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:26.295 17:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:26.295 17:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:26.295 17:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:26.295 17:31:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:26.295 17:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.295 17:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:26.295 17:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.295 17:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:26.295 "name": "Existed_Raid", 00:34:26.295 "uuid": "fcbd037c-d018-4e60-8965-b5c139a09d85", 00:34:26.295 "strip_size_kb": 64, 00:34:26.295 "state": "configuring", 00:34:26.295 "raid_level": "concat", 00:34:26.295 "superblock": true, 00:34:26.295 "num_base_bdevs": 3, 00:34:26.295 "num_base_bdevs_discovered": 0, 00:34:26.295 "num_base_bdevs_operational": 3, 00:34:26.295 "base_bdevs_list": [ 00:34:26.295 { 00:34:26.295 "name": "BaseBdev1", 00:34:26.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:26.295 "is_configured": false, 00:34:26.295 "data_offset": 0, 00:34:26.295 "data_size": 0 00:34:26.295 }, 00:34:26.295 { 00:34:26.295 "name": "BaseBdev2", 00:34:26.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:26.295 "is_configured": false, 00:34:26.295 "data_offset": 0, 00:34:26.295 "data_size": 0 00:34:26.295 }, 00:34:26.295 { 00:34:26.295 "name": "BaseBdev3", 00:34:26.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:26.295 "is_configured": false, 00:34:26.295 "data_offset": 0, 00:34:26.295 "data_size": 0 00:34:26.295 } 00:34:26.295 ] 00:34:26.295 }' 00:34:26.295 17:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:26.295 17:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:26.555 17:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:34:26.555 17:31:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.555 17:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:26.555 [2024-11-26 17:31:27.241974] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:26.555 [2024-11-26 17:31:27.242066] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:34:26.555 17:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.555 17:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:34:26.555 17:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.555 17:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:26.826 [2024-11-26 17:31:27.249974] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:26.826 [2024-11-26 17:31:27.250061] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:26.826 [2024-11-26 17:31:27.250100] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:26.826 [2024-11-26 17:31:27.250136] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:26.826 [2024-11-26 17:31:27.250173] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:26.826 [2024-11-26 17:31:27.250208] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:26.826 17:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.826 17:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:34:26.826 
17:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.826 17:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:26.826 [2024-11-26 17:31:27.300469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:26.826 BaseBdev1 00:34:26.826 17:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.826 17:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:34:26.826 17:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:34:26.826 17:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:26.826 17:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:34:26.826 17:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:26.826 17:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:26.826 17:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:26.826 17:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.826 17:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:26.826 17:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.826 17:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:34:26.826 17:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.826 17:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:26.826 [ 00:34:26.826 { 
00:34:26.826 "name": "BaseBdev1", 00:34:26.826 "aliases": [ 00:34:26.826 "58166989-3ef5-4637-8c0e-7ad0d3403750" 00:34:26.826 ], 00:34:26.826 "product_name": "Malloc disk", 00:34:26.826 "block_size": 512, 00:34:26.826 "num_blocks": 65536, 00:34:26.826 "uuid": "58166989-3ef5-4637-8c0e-7ad0d3403750", 00:34:26.826 "assigned_rate_limits": { 00:34:26.826 "rw_ios_per_sec": 0, 00:34:26.826 "rw_mbytes_per_sec": 0, 00:34:26.826 "r_mbytes_per_sec": 0, 00:34:26.826 "w_mbytes_per_sec": 0 00:34:26.826 }, 00:34:26.826 "claimed": true, 00:34:26.826 "claim_type": "exclusive_write", 00:34:26.826 "zoned": false, 00:34:26.826 "supported_io_types": { 00:34:26.826 "read": true, 00:34:26.826 "write": true, 00:34:26.826 "unmap": true, 00:34:26.826 "flush": true, 00:34:26.826 "reset": true, 00:34:26.826 "nvme_admin": false, 00:34:26.826 "nvme_io": false, 00:34:26.826 "nvme_io_md": false, 00:34:26.826 "write_zeroes": true, 00:34:26.826 "zcopy": true, 00:34:26.826 "get_zone_info": false, 00:34:26.826 "zone_management": false, 00:34:26.826 "zone_append": false, 00:34:26.826 "compare": false, 00:34:26.826 "compare_and_write": false, 00:34:26.826 "abort": true, 00:34:26.826 "seek_hole": false, 00:34:26.826 "seek_data": false, 00:34:26.826 "copy": true, 00:34:26.826 "nvme_iov_md": false 00:34:26.826 }, 00:34:26.826 "memory_domains": [ 00:34:26.826 { 00:34:26.826 "dma_device_id": "system", 00:34:26.826 "dma_device_type": 1 00:34:26.826 }, 00:34:26.826 { 00:34:26.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:26.826 "dma_device_type": 2 00:34:26.826 } 00:34:26.826 ], 00:34:26.826 "driver_specific": {} 00:34:26.826 } 00:34:26.826 ] 00:34:26.826 17:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.826 17:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:34:26.826 17:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:34:26.826 17:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:26.826 17:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:26.826 17:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:34:26.826 17:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:26.826 17:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:26.826 17:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:26.826 17:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:26.826 17:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:26.826 17:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:26.827 17:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:26.827 17:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:26.827 17:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.827 17:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:26.827 17:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.827 17:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:26.827 "name": "Existed_Raid", 00:34:26.827 "uuid": "bc9560aa-73f5-44c8-af87-e61fd87a8e64", 00:34:26.827 "strip_size_kb": 64, 00:34:26.827 "state": "configuring", 00:34:26.827 "raid_level": "concat", 00:34:26.827 "superblock": true, 00:34:26.827 
"num_base_bdevs": 3, 00:34:26.827 "num_base_bdevs_discovered": 1, 00:34:26.827 "num_base_bdevs_operational": 3, 00:34:26.827 "base_bdevs_list": [ 00:34:26.827 { 00:34:26.827 "name": "BaseBdev1", 00:34:26.827 "uuid": "58166989-3ef5-4637-8c0e-7ad0d3403750", 00:34:26.827 "is_configured": true, 00:34:26.827 "data_offset": 2048, 00:34:26.827 "data_size": 63488 00:34:26.827 }, 00:34:26.827 { 00:34:26.827 "name": "BaseBdev2", 00:34:26.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:26.827 "is_configured": false, 00:34:26.827 "data_offset": 0, 00:34:26.827 "data_size": 0 00:34:26.827 }, 00:34:26.827 { 00:34:26.827 "name": "BaseBdev3", 00:34:26.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:26.827 "is_configured": false, 00:34:26.827 "data_offset": 0, 00:34:26.827 "data_size": 0 00:34:26.827 } 00:34:26.827 ] 00:34:26.827 }' 00:34:26.827 17:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:26.827 17:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:27.086 17:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:34:27.086 17:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.086 17:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:27.086 [2024-11-26 17:31:27.739836] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:27.086 [2024-11-26 17:31:27.739955] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:34:27.086 17:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.086 17:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:34:27.086 
17:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.086 17:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:27.086 [2024-11-26 17:31:27.751918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:27.086 [2024-11-26 17:31:27.754071] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:27.086 [2024-11-26 17:31:27.754157] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:27.086 [2024-11-26 17:31:27.754198] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:27.086 [2024-11-26 17:31:27.754236] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:27.086 17:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.086 17:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:34:27.086 17:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:34:27.086 17:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:34:27.087 17:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:27.087 17:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:27.087 17:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:34:27.087 17:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:27.087 17:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:27.087 17:31:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:27.087 17:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:27.087 17:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:27.087 17:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:27.087 17:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:27.087 17:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:27.087 17:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.087 17:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:27.346 17:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.346 17:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:27.346 "name": "Existed_Raid", 00:34:27.346 "uuid": "7c426a1a-f6c7-47e2-9ff4-ccb04cce05e0", 00:34:27.346 "strip_size_kb": 64, 00:34:27.346 "state": "configuring", 00:34:27.346 "raid_level": "concat", 00:34:27.346 "superblock": true, 00:34:27.346 "num_base_bdevs": 3, 00:34:27.346 "num_base_bdevs_discovered": 1, 00:34:27.346 "num_base_bdevs_operational": 3, 00:34:27.346 "base_bdevs_list": [ 00:34:27.346 { 00:34:27.346 "name": "BaseBdev1", 00:34:27.346 "uuid": "58166989-3ef5-4637-8c0e-7ad0d3403750", 00:34:27.346 "is_configured": true, 00:34:27.346 "data_offset": 2048, 00:34:27.346 "data_size": 63488 00:34:27.346 }, 00:34:27.346 { 00:34:27.346 "name": "BaseBdev2", 00:34:27.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:27.346 "is_configured": false, 00:34:27.346 "data_offset": 0, 00:34:27.346 "data_size": 0 00:34:27.346 }, 00:34:27.346 { 00:34:27.346 "name": "BaseBdev3", 00:34:27.346 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:34:27.346 "is_configured": false, 00:34:27.346 "data_offset": 0, 00:34:27.346 "data_size": 0 00:34:27.346 } 00:34:27.346 ] 00:34:27.346 }' 00:34:27.347 17:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:27.347 17:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:27.607 17:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:34:27.607 17:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.607 17:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:27.607 [2024-11-26 17:31:28.198412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:27.607 BaseBdev2 00:34:27.607 17:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.607 17:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:34:27.607 17:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:34:27.607 17:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:27.607 17:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:34:27.607 17:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:27.607 17:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:27.607 17:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:27.607 17:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.607 17:31:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:34:27.607 17:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.607 17:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:34:27.607 17:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.607 17:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:27.607 [ 00:34:27.607 { 00:34:27.607 "name": "BaseBdev2", 00:34:27.607 "aliases": [ 00:34:27.607 "fd7a8d54-aa6d-49df-be4b-fdb58131ce1b" 00:34:27.607 ], 00:34:27.607 "product_name": "Malloc disk", 00:34:27.607 "block_size": 512, 00:34:27.607 "num_blocks": 65536, 00:34:27.607 "uuid": "fd7a8d54-aa6d-49df-be4b-fdb58131ce1b", 00:34:27.607 "assigned_rate_limits": { 00:34:27.607 "rw_ios_per_sec": 0, 00:34:27.607 "rw_mbytes_per_sec": 0, 00:34:27.607 "r_mbytes_per_sec": 0, 00:34:27.607 "w_mbytes_per_sec": 0 00:34:27.607 }, 00:34:27.607 "claimed": true, 00:34:27.607 "claim_type": "exclusive_write", 00:34:27.607 "zoned": false, 00:34:27.607 "supported_io_types": { 00:34:27.607 "read": true, 00:34:27.607 "write": true, 00:34:27.607 "unmap": true, 00:34:27.607 "flush": true, 00:34:27.607 "reset": true, 00:34:27.607 "nvme_admin": false, 00:34:27.607 "nvme_io": false, 00:34:27.607 "nvme_io_md": false, 00:34:27.607 "write_zeroes": true, 00:34:27.607 "zcopy": true, 00:34:27.607 "get_zone_info": false, 00:34:27.607 "zone_management": false, 00:34:27.607 "zone_append": false, 00:34:27.607 "compare": false, 00:34:27.607 "compare_and_write": false, 00:34:27.607 "abort": true, 00:34:27.607 "seek_hole": false, 00:34:27.607 "seek_data": false, 00:34:27.607 "copy": true, 00:34:27.607 "nvme_iov_md": false 00:34:27.607 }, 00:34:27.607 "memory_domains": [ 00:34:27.607 { 00:34:27.607 "dma_device_id": "system", 00:34:27.607 "dma_device_type": 1 00:34:27.607 }, 00:34:27.607 { 00:34:27.607 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:27.607 "dma_device_type": 2 00:34:27.607 } 00:34:27.607 ], 00:34:27.607 "driver_specific": {} 00:34:27.607 } 00:34:27.607 ] 00:34:27.607 17:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.607 17:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:34:27.607 17:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:34:27.607 17:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:34:27.607 17:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:34:27.607 17:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:27.607 17:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:27.607 17:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:34:27.607 17:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:27.607 17:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:27.607 17:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:27.607 17:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:27.607 17:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:27.607 17:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:27.607 17:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:27.607 17:31:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.607 17:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:27.607 17:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:27.607 17:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.607 17:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:27.607 "name": "Existed_Raid", 00:34:27.607 "uuid": "7c426a1a-f6c7-47e2-9ff4-ccb04cce05e0", 00:34:27.607 "strip_size_kb": 64, 00:34:27.607 "state": "configuring", 00:34:27.607 "raid_level": "concat", 00:34:27.607 "superblock": true, 00:34:27.607 "num_base_bdevs": 3, 00:34:27.607 "num_base_bdevs_discovered": 2, 00:34:27.607 "num_base_bdevs_operational": 3, 00:34:27.607 "base_bdevs_list": [ 00:34:27.607 { 00:34:27.607 "name": "BaseBdev1", 00:34:27.607 "uuid": "58166989-3ef5-4637-8c0e-7ad0d3403750", 00:34:27.607 "is_configured": true, 00:34:27.607 "data_offset": 2048, 00:34:27.607 "data_size": 63488 00:34:27.607 }, 00:34:27.607 { 00:34:27.607 "name": "BaseBdev2", 00:34:27.607 "uuid": "fd7a8d54-aa6d-49df-be4b-fdb58131ce1b", 00:34:27.607 "is_configured": true, 00:34:27.607 "data_offset": 2048, 00:34:27.607 "data_size": 63488 00:34:27.607 }, 00:34:27.607 { 00:34:27.607 "name": "BaseBdev3", 00:34:27.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:27.607 "is_configured": false, 00:34:27.607 "data_offset": 0, 00:34:27.607 "data_size": 0 00:34:27.607 } 00:34:27.607 ] 00:34:27.607 }' 00:34:27.607 17:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:27.607 17:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:28.177 17:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:34:28.177 17:31:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.177 17:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:28.177 [2024-11-26 17:31:28.733067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:28.177 [2024-11-26 17:31:28.733439] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:34:28.177 [2024-11-26 17:31:28.733504] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:34:28.177 BaseBdev3 00:34:28.177 [2024-11-26 17:31:28.733880] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:34:28.177 [2024-11-26 17:31:28.734067] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:34:28.177 [2024-11-26 17:31:28.734125] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:34:28.177 [2024-11-26 17:31:28.734342] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:28.177 17:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.177 17:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:34:28.177 17:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:34:28.177 17:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:28.177 17:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:34:28.177 17:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:28.177 17:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:28.177 17:31:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:28.177 17:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.177 17:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:28.177 17:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.177 17:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:34:28.177 17:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.177 17:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:28.177 [ 00:34:28.177 { 00:34:28.177 "name": "BaseBdev3", 00:34:28.177 "aliases": [ 00:34:28.177 "2032e5e9-0f71-44ba-910d-67d13cfa3d57" 00:34:28.177 ], 00:34:28.177 "product_name": "Malloc disk", 00:34:28.177 "block_size": 512, 00:34:28.177 "num_blocks": 65536, 00:34:28.177 "uuid": "2032e5e9-0f71-44ba-910d-67d13cfa3d57", 00:34:28.177 "assigned_rate_limits": { 00:34:28.177 "rw_ios_per_sec": 0, 00:34:28.177 "rw_mbytes_per_sec": 0, 00:34:28.177 "r_mbytes_per_sec": 0, 00:34:28.177 "w_mbytes_per_sec": 0 00:34:28.177 }, 00:34:28.177 "claimed": true, 00:34:28.177 "claim_type": "exclusive_write", 00:34:28.177 "zoned": false, 00:34:28.177 "supported_io_types": { 00:34:28.177 "read": true, 00:34:28.177 "write": true, 00:34:28.177 "unmap": true, 00:34:28.177 "flush": true, 00:34:28.177 "reset": true, 00:34:28.177 "nvme_admin": false, 00:34:28.177 "nvme_io": false, 00:34:28.177 "nvme_io_md": false, 00:34:28.177 "write_zeroes": true, 00:34:28.177 "zcopy": true, 00:34:28.177 "get_zone_info": false, 00:34:28.177 "zone_management": false, 00:34:28.177 "zone_append": false, 00:34:28.177 "compare": false, 00:34:28.177 "compare_and_write": false, 00:34:28.177 "abort": true, 00:34:28.177 "seek_hole": false, 00:34:28.177 "seek_data": false, 
00:34:28.177 "copy": true, 00:34:28.177 "nvme_iov_md": false 00:34:28.177 }, 00:34:28.177 "memory_domains": [ 00:34:28.177 { 00:34:28.177 "dma_device_id": "system", 00:34:28.177 "dma_device_type": 1 00:34:28.177 }, 00:34:28.177 { 00:34:28.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:28.177 "dma_device_type": 2 00:34:28.177 } 00:34:28.177 ], 00:34:28.177 "driver_specific": {} 00:34:28.177 } 00:34:28.177 ] 00:34:28.177 17:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.177 17:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:34:28.177 17:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:34:28.177 17:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:34:28.177 17:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:34:28.177 17:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:28.177 17:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:28.177 17:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:34:28.177 17:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:28.177 17:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:28.177 17:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:28.177 17:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:28.177 17:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:28.177 17:31:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:34:28.177 17:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:28.177 17:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.177 17:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:28.177 17:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:28.177 17:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.177 17:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:28.177 "name": "Existed_Raid", 00:34:28.177 "uuid": "7c426a1a-f6c7-47e2-9ff4-ccb04cce05e0", 00:34:28.177 "strip_size_kb": 64, 00:34:28.177 "state": "online", 00:34:28.177 "raid_level": "concat", 00:34:28.177 "superblock": true, 00:34:28.177 "num_base_bdevs": 3, 00:34:28.177 "num_base_bdevs_discovered": 3, 00:34:28.177 "num_base_bdevs_operational": 3, 00:34:28.177 "base_bdevs_list": [ 00:34:28.177 { 00:34:28.177 "name": "BaseBdev1", 00:34:28.177 "uuid": "58166989-3ef5-4637-8c0e-7ad0d3403750", 00:34:28.177 "is_configured": true, 00:34:28.177 "data_offset": 2048, 00:34:28.177 "data_size": 63488 00:34:28.177 }, 00:34:28.177 { 00:34:28.177 "name": "BaseBdev2", 00:34:28.177 "uuid": "fd7a8d54-aa6d-49df-be4b-fdb58131ce1b", 00:34:28.177 "is_configured": true, 00:34:28.177 "data_offset": 2048, 00:34:28.177 "data_size": 63488 00:34:28.177 }, 00:34:28.177 { 00:34:28.177 "name": "BaseBdev3", 00:34:28.177 "uuid": "2032e5e9-0f71-44ba-910d-67d13cfa3d57", 00:34:28.177 "is_configured": true, 00:34:28.177 "data_offset": 2048, 00:34:28.177 "data_size": 63488 00:34:28.177 } 00:34:28.177 ] 00:34:28.177 }' 00:34:28.177 17:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:28.177 17:31:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:28.748 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:34:28.748 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:34:28.748 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:34:28.748 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:34:28.748 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:34:28.748 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:34:28.748 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:34:28.748 17:31:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.748 17:31:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:28.748 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:34:28.748 [2024-11-26 17:31:29.144819] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:28.748 17:31:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.748 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:28.748 "name": "Existed_Raid", 00:34:28.748 "aliases": [ 00:34:28.748 "7c426a1a-f6c7-47e2-9ff4-ccb04cce05e0" 00:34:28.748 ], 00:34:28.748 "product_name": "Raid Volume", 00:34:28.748 "block_size": 512, 00:34:28.748 "num_blocks": 190464, 00:34:28.748 "uuid": "7c426a1a-f6c7-47e2-9ff4-ccb04cce05e0", 00:34:28.748 "assigned_rate_limits": { 00:34:28.748 "rw_ios_per_sec": 0, 00:34:28.748 "rw_mbytes_per_sec": 0, 00:34:28.748 
"r_mbytes_per_sec": 0, 00:34:28.748 "w_mbytes_per_sec": 0 00:34:28.748 }, 00:34:28.748 "claimed": false, 00:34:28.748 "zoned": false, 00:34:28.748 "supported_io_types": { 00:34:28.748 "read": true, 00:34:28.748 "write": true, 00:34:28.748 "unmap": true, 00:34:28.748 "flush": true, 00:34:28.748 "reset": true, 00:34:28.748 "nvme_admin": false, 00:34:28.748 "nvme_io": false, 00:34:28.748 "nvme_io_md": false, 00:34:28.748 "write_zeroes": true, 00:34:28.748 "zcopy": false, 00:34:28.748 "get_zone_info": false, 00:34:28.748 "zone_management": false, 00:34:28.748 "zone_append": false, 00:34:28.748 "compare": false, 00:34:28.748 "compare_and_write": false, 00:34:28.748 "abort": false, 00:34:28.748 "seek_hole": false, 00:34:28.748 "seek_data": false, 00:34:28.748 "copy": false, 00:34:28.748 "nvme_iov_md": false 00:34:28.748 }, 00:34:28.748 "memory_domains": [ 00:34:28.748 { 00:34:28.748 "dma_device_id": "system", 00:34:28.748 "dma_device_type": 1 00:34:28.748 }, 00:34:28.748 { 00:34:28.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:28.748 "dma_device_type": 2 00:34:28.748 }, 00:34:28.748 { 00:34:28.748 "dma_device_id": "system", 00:34:28.748 "dma_device_type": 1 00:34:28.748 }, 00:34:28.748 { 00:34:28.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:28.748 "dma_device_type": 2 00:34:28.748 }, 00:34:28.748 { 00:34:28.748 "dma_device_id": "system", 00:34:28.748 "dma_device_type": 1 00:34:28.748 }, 00:34:28.748 { 00:34:28.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:28.748 "dma_device_type": 2 00:34:28.748 } 00:34:28.748 ], 00:34:28.748 "driver_specific": { 00:34:28.748 "raid": { 00:34:28.748 "uuid": "7c426a1a-f6c7-47e2-9ff4-ccb04cce05e0", 00:34:28.748 "strip_size_kb": 64, 00:34:28.748 "state": "online", 00:34:28.748 "raid_level": "concat", 00:34:28.748 "superblock": true, 00:34:28.748 "num_base_bdevs": 3, 00:34:28.748 "num_base_bdevs_discovered": 3, 00:34:28.748 "num_base_bdevs_operational": 3, 00:34:28.748 "base_bdevs_list": [ 00:34:28.748 { 00:34:28.748 
"name": "BaseBdev1", 00:34:28.748 "uuid": "58166989-3ef5-4637-8c0e-7ad0d3403750", 00:34:28.748 "is_configured": true, 00:34:28.748 "data_offset": 2048, 00:34:28.748 "data_size": 63488 00:34:28.748 }, 00:34:28.748 { 00:34:28.748 "name": "BaseBdev2", 00:34:28.748 "uuid": "fd7a8d54-aa6d-49df-be4b-fdb58131ce1b", 00:34:28.748 "is_configured": true, 00:34:28.748 "data_offset": 2048, 00:34:28.748 "data_size": 63488 00:34:28.748 }, 00:34:28.748 { 00:34:28.748 "name": "BaseBdev3", 00:34:28.748 "uuid": "2032e5e9-0f71-44ba-910d-67d13cfa3d57", 00:34:28.748 "is_configured": true, 00:34:28.748 "data_offset": 2048, 00:34:28.748 "data_size": 63488 00:34:28.748 } 00:34:28.748 ] 00:34:28.748 } 00:34:28.748 } 00:34:28.748 }' 00:34:28.748 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:28.748 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:34:28.748 BaseBdev2 00:34:28.748 BaseBdev3' 00:34:28.748 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:28.748 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:34:28.748 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:28.748 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:28.748 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:34:28.748 17:31:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.748 17:31:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:28.748 17:31:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.748 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:28.748 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:28.748 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:28.748 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:28.748 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:34:28.748 17:31:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.748 17:31:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:28.748 17:31:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.748 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:28.748 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:28.748 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:28.748 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:34:28.748 17:31:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.748 17:31:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:28.748 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:28.748 17:31:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.748 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:28.748 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:28.748 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:34:28.748 17:31:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.748 17:31:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:28.748 [2024-11-26 17:31:29.364133] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:28.748 [2024-11-26 17:31:29.364165] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:28.748 [2024-11-26 17:31:29.364226] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:29.007 17:31:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.007 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:34:29.007 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:34:29.008 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:34:29.008 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:34:29.008 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:34:29.008 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:34:29.008 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:29.008 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:34:29.008 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:34:29.008 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:29.008 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:29.008 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:29.008 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:29.008 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:29.008 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:29.008 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:29.008 17:31:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.008 17:31:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:29.008 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:29.008 17:31:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.008 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:29.008 "name": "Existed_Raid", 00:34:29.008 "uuid": "7c426a1a-f6c7-47e2-9ff4-ccb04cce05e0", 00:34:29.008 "strip_size_kb": 64, 00:34:29.008 "state": "offline", 00:34:29.008 "raid_level": "concat", 00:34:29.008 "superblock": true, 00:34:29.008 "num_base_bdevs": 3, 00:34:29.008 "num_base_bdevs_discovered": 2, 00:34:29.008 "num_base_bdevs_operational": 2, 00:34:29.008 "base_bdevs_list": [ 00:34:29.008 { 00:34:29.008 "name": null, 00:34:29.008 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:34:29.008 "is_configured": false, 00:34:29.008 "data_offset": 0, 00:34:29.008 "data_size": 63488 00:34:29.008 }, 00:34:29.008 { 00:34:29.008 "name": "BaseBdev2", 00:34:29.008 "uuid": "fd7a8d54-aa6d-49df-be4b-fdb58131ce1b", 00:34:29.008 "is_configured": true, 00:34:29.008 "data_offset": 2048, 00:34:29.008 "data_size": 63488 00:34:29.008 }, 00:34:29.008 { 00:34:29.008 "name": "BaseBdev3", 00:34:29.008 "uuid": "2032e5e9-0f71-44ba-910d-67d13cfa3d57", 00:34:29.008 "is_configured": true, 00:34:29.008 "data_offset": 2048, 00:34:29.008 "data_size": 63488 00:34:29.008 } 00:34:29.008 ] 00:34:29.008 }' 00:34:29.008 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:29.008 17:31:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:29.267 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:34:29.267 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:34:29.267 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:29.267 17:31:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.267 17:31:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:29.267 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:34:29.267 17:31:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.527 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:34:29.527 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:29.527 17:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:34:29.527 17:31:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.527 17:31:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:29.527 [2024-11-26 17:31:29.991472] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:34:29.527 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.527 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:34:29.527 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:34:29.527 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:29.527 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:34:29.527 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.527 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:29.527 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.527 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:34:29.527 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:29.527 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:34:29.527 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.527 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:29.527 [2024-11-26 17:31:30.145169] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:34:29.527 [2024-11-26 17:31:30.145224] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:34:29.787 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.787 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:34:29.787 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:34:29.787 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:29.787 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.787 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:29.787 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:34:29.787 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.787 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:34:29.787 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:34:29.787 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:34:29.787 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:34:29.787 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:34:29.787 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:34:29.787 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.787 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:29.787 BaseBdev2 00:34:29.787 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.787 
17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:34:29.787 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:34:29.787 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:29.787 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:34:29.787 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:29.787 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:29.787 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:29.787 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.787 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:29.787 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.787 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:34:29.787 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.787 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:29.787 [ 00:34:29.787 { 00:34:29.787 "name": "BaseBdev2", 00:34:29.788 "aliases": [ 00:34:29.788 "24c060ba-2a26-464d-9ecf-d22d9b19be2f" 00:34:29.788 ], 00:34:29.788 "product_name": "Malloc disk", 00:34:29.788 "block_size": 512, 00:34:29.788 "num_blocks": 65536, 00:34:29.788 "uuid": "24c060ba-2a26-464d-9ecf-d22d9b19be2f", 00:34:29.788 "assigned_rate_limits": { 00:34:29.788 "rw_ios_per_sec": 0, 00:34:29.788 "rw_mbytes_per_sec": 0, 00:34:29.788 "r_mbytes_per_sec": 0, 00:34:29.788 "w_mbytes_per_sec": 0 
00:34:29.788 }, 00:34:29.788 "claimed": false, 00:34:29.788 "zoned": false, 00:34:29.788 "supported_io_types": { 00:34:29.788 "read": true, 00:34:29.788 "write": true, 00:34:29.788 "unmap": true, 00:34:29.788 "flush": true, 00:34:29.788 "reset": true, 00:34:29.788 "nvme_admin": false, 00:34:29.788 "nvme_io": false, 00:34:29.788 "nvme_io_md": false, 00:34:29.788 "write_zeroes": true, 00:34:29.788 "zcopy": true, 00:34:29.788 "get_zone_info": false, 00:34:29.788 "zone_management": false, 00:34:29.788 "zone_append": false, 00:34:29.788 "compare": false, 00:34:29.788 "compare_and_write": false, 00:34:29.788 "abort": true, 00:34:29.788 "seek_hole": false, 00:34:29.788 "seek_data": false, 00:34:29.788 "copy": true, 00:34:29.788 "nvme_iov_md": false 00:34:29.788 }, 00:34:29.788 "memory_domains": [ 00:34:29.788 { 00:34:29.788 "dma_device_id": "system", 00:34:29.788 "dma_device_type": 1 00:34:29.788 }, 00:34:29.788 { 00:34:29.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:29.788 "dma_device_type": 2 00:34:29.788 } 00:34:29.788 ], 00:34:29.788 "driver_specific": {} 00:34:29.788 } 00:34:29.788 ] 00:34:29.788 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.788 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:34:29.788 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:34:29.788 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:34:29.788 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:34:29.788 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.788 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:29.788 BaseBdev3 00:34:29.788 17:31:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.788 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:34:29.788 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:34:29.788 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:29.788 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:34:29.788 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:29.788 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:29.788 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:29.788 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.788 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:29.788 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.788 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:34:29.788 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.788 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:29.788 [ 00:34:29.788 { 00:34:29.788 "name": "BaseBdev3", 00:34:29.788 "aliases": [ 00:34:29.788 "fa45b043-aa46-455d-8149-205df16677dc" 00:34:29.788 ], 00:34:29.788 "product_name": "Malloc disk", 00:34:29.788 "block_size": 512, 00:34:29.788 "num_blocks": 65536, 00:34:29.788 "uuid": "fa45b043-aa46-455d-8149-205df16677dc", 00:34:29.788 "assigned_rate_limits": { 00:34:29.788 "rw_ios_per_sec": 0, 00:34:29.788 "rw_mbytes_per_sec": 0, 
00:34:29.788 "r_mbytes_per_sec": 0, 00:34:29.788 "w_mbytes_per_sec": 0 00:34:29.788 }, 00:34:29.788 "claimed": false, 00:34:29.788 "zoned": false, 00:34:29.788 "supported_io_types": { 00:34:29.788 "read": true, 00:34:29.788 "write": true, 00:34:29.788 "unmap": true, 00:34:29.788 "flush": true, 00:34:29.788 "reset": true, 00:34:29.788 "nvme_admin": false, 00:34:29.788 "nvme_io": false, 00:34:29.788 "nvme_io_md": false, 00:34:29.788 "write_zeroes": true, 00:34:29.788 "zcopy": true, 00:34:29.788 "get_zone_info": false, 00:34:29.788 "zone_management": false, 00:34:29.788 "zone_append": false, 00:34:29.788 "compare": false, 00:34:29.788 "compare_and_write": false, 00:34:29.788 "abort": true, 00:34:29.788 "seek_hole": false, 00:34:29.788 "seek_data": false, 00:34:29.788 "copy": true, 00:34:29.788 "nvme_iov_md": false 00:34:29.788 }, 00:34:29.788 "memory_domains": [ 00:34:29.788 { 00:34:29.788 "dma_device_id": "system", 00:34:29.788 "dma_device_type": 1 00:34:29.788 }, 00:34:29.788 { 00:34:29.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:29.788 "dma_device_type": 2 00:34:29.788 } 00:34:29.788 ], 00:34:29.788 "driver_specific": {} 00:34:29.788 } 00:34:29.788 ] 00:34:29.788 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.788 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:34:29.788 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:34:29.788 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:34:29.788 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:34:29.788 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.788 17:31:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:34:30.048 [2024-11-26 17:31:30.482456] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:30.048 [2024-11-26 17:31:30.482594] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:30.048 [2024-11-26 17:31:30.482671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:30.048 [2024-11-26 17:31:30.484941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:30.048 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.048 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:34:30.048 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:30.048 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:30.048 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:34:30.048 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:30.048 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:30.048 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:30.048 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:30.048 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:30.048 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:30.048 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:30.048 17:31:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.048 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:30.048 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:30.048 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.048 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:30.048 "name": "Existed_Raid", 00:34:30.048 "uuid": "78a5e9e4-c7c7-4dfb-abb1-fe08ebc8d10e", 00:34:30.048 "strip_size_kb": 64, 00:34:30.048 "state": "configuring", 00:34:30.048 "raid_level": "concat", 00:34:30.048 "superblock": true, 00:34:30.048 "num_base_bdevs": 3, 00:34:30.048 "num_base_bdevs_discovered": 2, 00:34:30.048 "num_base_bdevs_operational": 3, 00:34:30.048 "base_bdevs_list": [ 00:34:30.048 { 00:34:30.048 "name": "BaseBdev1", 00:34:30.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:30.048 "is_configured": false, 00:34:30.048 "data_offset": 0, 00:34:30.048 "data_size": 0 00:34:30.048 }, 00:34:30.048 { 00:34:30.048 "name": "BaseBdev2", 00:34:30.048 "uuid": "24c060ba-2a26-464d-9ecf-d22d9b19be2f", 00:34:30.048 "is_configured": true, 00:34:30.048 "data_offset": 2048, 00:34:30.048 "data_size": 63488 00:34:30.048 }, 00:34:30.048 { 00:34:30.048 "name": "BaseBdev3", 00:34:30.048 "uuid": "fa45b043-aa46-455d-8149-205df16677dc", 00:34:30.048 "is_configured": true, 00:34:30.048 "data_offset": 2048, 00:34:30.048 "data_size": 63488 00:34:30.048 } 00:34:30.048 ] 00:34:30.048 }' 00:34:30.048 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:30.048 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:30.308 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:34:30.308 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.308 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:30.308 [2024-11-26 17:31:30.909741] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:34:30.308 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.308 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:34:30.308 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:30.308 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:30.308 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:34:30.308 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:30.308 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:30.308 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:30.308 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:30.308 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:30.308 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:30.308 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:30.308 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.308 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:34:30.308 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:30.308 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.308 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:30.308 "name": "Existed_Raid", 00:34:30.308 "uuid": "78a5e9e4-c7c7-4dfb-abb1-fe08ebc8d10e", 00:34:30.308 "strip_size_kb": 64, 00:34:30.308 "state": "configuring", 00:34:30.308 "raid_level": "concat", 00:34:30.308 "superblock": true, 00:34:30.308 "num_base_bdevs": 3, 00:34:30.308 "num_base_bdevs_discovered": 1, 00:34:30.308 "num_base_bdevs_operational": 3, 00:34:30.308 "base_bdevs_list": [ 00:34:30.308 { 00:34:30.308 "name": "BaseBdev1", 00:34:30.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:30.308 "is_configured": false, 00:34:30.308 "data_offset": 0, 00:34:30.308 "data_size": 0 00:34:30.308 }, 00:34:30.308 { 00:34:30.308 "name": null, 00:34:30.308 "uuid": "24c060ba-2a26-464d-9ecf-d22d9b19be2f", 00:34:30.308 "is_configured": false, 00:34:30.308 "data_offset": 0, 00:34:30.308 "data_size": 63488 00:34:30.308 }, 00:34:30.308 { 00:34:30.308 "name": "BaseBdev3", 00:34:30.308 "uuid": "fa45b043-aa46-455d-8149-205df16677dc", 00:34:30.308 "is_configured": true, 00:34:30.308 "data_offset": 2048, 00:34:30.308 "data_size": 63488 00:34:30.308 } 00:34:30.308 ] 00:34:30.308 }' 00:34:30.308 17:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:30.308 17:31:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:30.878 17:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:34:30.878 17:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:30.878 17:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:34:30.878 17:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:30.878 17:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.878 17:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:34:30.878 17:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:34:30.878 17:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.878 17:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:30.878 [2024-11-26 17:31:31.426223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:30.878 BaseBdev1 00:34:30.878 17:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.878 17:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:34:30.878 17:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:34:30.878 17:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:30.878 17:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:34:30.878 17:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:30.878 17:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:30.878 17:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:30.878 17:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.878 17:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:30.878 17:31:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.878 17:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:34:30.878 17:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.878 17:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:30.878 [ 00:34:30.878 { 00:34:30.878 "name": "BaseBdev1", 00:34:30.878 "aliases": [ 00:34:30.878 "fe98c76d-826c-489a-9bbb-b952adf68e0d" 00:34:30.878 ], 00:34:30.878 "product_name": "Malloc disk", 00:34:30.878 "block_size": 512, 00:34:30.878 "num_blocks": 65536, 00:34:30.878 "uuid": "fe98c76d-826c-489a-9bbb-b952adf68e0d", 00:34:30.878 "assigned_rate_limits": { 00:34:30.878 "rw_ios_per_sec": 0, 00:34:30.878 "rw_mbytes_per_sec": 0, 00:34:30.878 "r_mbytes_per_sec": 0, 00:34:30.878 "w_mbytes_per_sec": 0 00:34:30.878 }, 00:34:30.878 "claimed": true, 00:34:30.878 "claim_type": "exclusive_write", 00:34:30.878 "zoned": false, 00:34:30.878 "supported_io_types": { 00:34:30.878 "read": true, 00:34:30.878 "write": true, 00:34:30.878 "unmap": true, 00:34:30.878 "flush": true, 00:34:30.878 "reset": true, 00:34:30.878 "nvme_admin": false, 00:34:30.878 "nvme_io": false, 00:34:30.878 "nvme_io_md": false, 00:34:30.878 "write_zeroes": true, 00:34:30.878 "zcopy": true, 00:34:30.878 "get_zone_info": false, 00:34:30.878 "zone_management": false, 00:34:30.878 "zone_append": false, 00:34:30.878 "compare": false, 00:34:30.878 "compare_and_write": false, 00:34:30.878 "abort": true, 00:34:30.878 "seek_hole": false, 00:34:30.878 "seek_data": false, 00:34:30.878 "copy": true, 00:34:30.878 "nvme_iov_md": false 00:34:30.878 }, 00:34:30.878 "memory_domains": [ 00:34:30.878 { 00:34:30.878 "dma_device_id": "system", 00:34:30.878 "dma_device_type": 1 00:34:30.878 }, 00:34:30.878 { 00:34:30.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:30.878 
"dma_device_type": 2 00:34:30.878 } 00:34:30.878 ], 00:34:30.878 "driver_specific": {} 00:34:30.878 } 00:34:30.878 ] 00:34:30.878 17:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.878 17:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:34:30.878 17:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:34:30.878 17:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:30.878 17:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:30.878 17:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:34:30.878 17:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:30.878 17:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:30.878 17:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:30.878 17:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:30.878 17:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:30.878 17:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:30.878 17:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:30.878 17:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:30.878 17:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.878 17:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:34:30.878 17:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.878 17:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:30.878 "name": "Existed_Raid", 00:34:30.878 "uuid": "78a5e9e4-c7c7-4dfb-abb1-fe08ebc8d10e", 00:34:30.878 "strip_size_kb": 64, 00:34:30.878 "state": "configuring", 00:34:30.878 "raid_level": "concat", 00:34:30.878 "superblock": true, 00:34:30.878 "num_base_bdevs": 3, 00:34:30.878 "num_base_bdevs_discovered": 2, 00:34:30.878 "num_base_bdevs_operational": 3, 00:34:30.878 "base_bdevs_list": [ 00:34:30.879 { 00:34:30.879 "name": "BaseBdev1", 00:34:30.879 "uuid": "fe98c76d-826c-489a-9bbb-b952adf68e0d", 00:34:30.879 "is_configured": true, 00:34:30.879 "data_offset": 2048, 00:34:30.879 "data_size": 63488 00:34:30.879 }, 00:34:30.879 { 00:34:30.879 "name": null, 00:34:30.879 "uuid": "24c060ba-2a26-464d-9ecf-d22d9b19be2f", 00:34:30.879 "is_configured": false, 00:34:30.879 "data_offset": 0, 00:34:30.879 "data_size": 63488 00:34:30.879 }, 00:34:30.879 { 00:34:30.879 "name": "BaseBdev3", 00:34:30.879 "uuid": "fa45b043-aa46-455d-8149-205df16677dc", 00:34:30.879 "is_configured": true, 00:34:30.879 "data_offset": 2048, 00:34:30.879 "data_size": 63488 00:34:30.879 } 00:34:30.879 ] 00:34:30.879 }' 00:34:30.879 17:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:30.879 17:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:31.478 17:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:34:31.478 17:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:31.478 17:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.478 17:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:34:31.478 17:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.478 17:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:34:31.478 17:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:34:31.478 17:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.478 17:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:31.478 [2024-11-26 17:31:31.917473] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:34:31.478 17:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.478 17:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:34:31.478 17:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:31.478 17:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:31.478 17:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:34:31.478 17:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:31.478 17:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:31.478 17:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:31.478 17:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:31.478 17:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:31.478 17:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:31.478 
17:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:31.478 17:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.478 17:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:31.478 17:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:31.478 17:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.478 17:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:31.478 "name": "Existed_Raid", 00:34:31.478 "uuid": "78a5e9e4-c7c7-4dfb-abb1-fe08ebc8d10e", 00:34:31.478 "strip_size_kb": 64, 00:34:31.478 "state": "configuring", 00:34:31.478 "raid_level": "concat", 00:34:31.478 "superblock": true, 00:34:31.478 "num_base_bdevs": 3, 00:34:31.478 "num_base_bdevs_discovered": 1, 00:34:31.478 "num_base_bdevs_operational": 3, 00:34:31.478 "base_bdevs_list": [ 00:34:31.478 { 00:34:31.478 "name": "BaseBdev1", 00:34:31.478 "uuid": "fe98c76d-826c-489a-9bbb-b952adf68e0d", 00:34:31.478 "is_configured": true, 00:34:31.478 "data_offset": 2048, 00:34:31.478 "data_size": 63488 00:34:31.478 }, 00:34:31.478 { 00:34:31.478 "name": null, 00:34:31.478 "uuid": "24c060ba-2a26-464d-9ecf-d22d9b19be2f", 00:34:31.478 "is_configured": false, 00:34:31.478 "data_offset": 0, 00:34:31.478 "data_size": 63488 00:34:31.478 }, 00:34:31.478 { 00:34:31.478 "name": null, 00:34:31.478 "uuid": "fa45b043-aa46-455d-8149-205df16677dc", 00:34:31.478 "is_configured": false, 00:34:31.478 "data_offset": 0, 00:34:31.478 "data_size": 63488 00:34:31.478 } 00:34:31.478 ] 00:34:31.478 }' 00:34:31.478 17:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:31.478 17:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:31.737 
17:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:31.737 17:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.737 17:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:31.737 17:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:34:31.737 17:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.737 17:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:34:31.737 17:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:34:31.737 17:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.737 17:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:31.737 [2024-11-26 17:31:32.380771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:31.737 17:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.737 17:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:34:31.737 17:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:31.737 17:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:31.737 17:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:34:31.737 17:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:31.737 17:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:34:31.737 17:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:31.737 17:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:31.737 17:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:31.737 17:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:31.737 17:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:31.737 17:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.737 17:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:31.737 17:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:31.737 17:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.996 17:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:31.996 "name": "Existed_Raid", 00:34:31.996 "uuid": "78a5e9e4-c7c7-4dfb-abb1-fe08ebc8d10e", 00:34:31.996 "strip_size_kb": 64, 00:34:31.996 "state": "configuring", 00:34:31.996 "raid_level": "concat", 00:34:31.996 "superblock": true, 00:34:31.996 "num_base_bdevs": 3, 00:34:31.996 "num_base_bdevs_discovered": 2, 00:34:31.996 "num_base_bdevs_operational": 3, 00:34:31.996 "base_bdevs_list": [ 00:34:31.996 { 00:34:31.996 "name": "BaseBdev1", 00:34:31.996 "uuid": "fe98c76d-826c-489a-9bbb-b952adf68e0d", 00:34:31.996 "is_configured": true, 00:34:31.996 "data_offset": 2048, 00:34:31.996 "data_size": 63488 00:34:31.996 }, 00:34:31.996 { 00:34:31.996 "name": null, 00:34:31.996 "uuid": "24c060ba-2a26-464d-9ecf-d22d9b19be2f", 00:34:31.996 "is_configured": false, 00:34:31.996 "data_offset": 0, 00:34:31.996 "data_size": 
63488 00:34:31.996 }, 00:34:31.996 { 00:34:31.996 "name": "BaseBdev3", 00:34:31.996 "uuid": "fa45b043-aa46-455d-8149-205df16677dc", 00:34:31.996 "is_configured": true, 00:34:31.996 "data_offset": 2048, 00:34:31.996 "data_size": 63488 00:34:31.996 } 00:34:31.996 ] 00:34:31.996 }' 00:34:31.996 17:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:31.996 17:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:32.255 17:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:32.255 17:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.255 17:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:32.255 17:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:34:32.255 17:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.255 17:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:34:32.255 17:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:34:32.255 17:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.255 17:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:32.255 [2024-11-26 17:31:32.816114] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:32.255 17:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.255 17:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:34:32.255 17:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:34:32.255 17:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:32.255 17:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:34:32.255 17:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:32.255 17:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:32.255 17:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:32.255 17:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:32.255 17:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:32.255 17:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:32.255 17:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:32.255 17:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.255 17:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:32.255 17:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:32.255 17:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.514 17:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:32.514 "name": "Existed_Raid", 00:34:32.514 "uuid": "78a5e9e4-c7c7-4dfb-abb1-fe08ebc8d10e", 00:34:32.514 "strip_size_kb": 64, 00:34:32.514 "state": "configuring", 00:34:32.514 "raid_level": "concat", 00:34:32.514 "superblock": true, 00:34:32.514 "num_base_bdevs": 3, 00:34:32.514 "num_base_bdevs_discovered": 1, 00:34:32.514 "num_base_bdevs_operational": 
3, 00:34:32.514 "base_bdevs_list": [ 00:34:32.514 { 00:34:32.514 "name": null, 00:34:32.514 "uuid": "fe98c76d-826c-489a-9bbb-b952adf68e0d", 00:34:32.514 "is_configured": false, 00:34:32.514 "data_offset": 0, 00:34:32.514 "data_size": 63488 00:34:32.514 }, 00:34:32.514 { 00:34:32.514 "name": null, 00:34:32.514 "uuid": "24c060ba-2a26-464d-9ecf-d22d9b19be2f", 00:34:32.514 "is_configured": false, 00:34:32.514 "data_offset": 0, 00:34:32.514 "data_size": 63488 00:34:32.514 }, 00:34:32.514 { 00:34:32.514 "name": "BaseBdev3", 00:34:32.514 "uuid": "fa45b043-aa46-455d-8149-205df16677dc", 00:34:32.514 "is_configured": true, 00:34:32.514 "data_offset": 2048, 00:34:32.514 "data_size": 63488 00:34:32.514 } 00:34:32.514 ] 00:34:32.514 }' 00:34:32.514 17:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:32.514 17:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:32.772 17:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:32.772 17:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.772 17:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:34:32.772 17:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:32.772 17:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.772 17:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:34:32.772 17:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:34:32.772 17:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.772 17:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:34:32.772 [2024-11-26 17:31:33.398652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:32.772 17:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.772 17:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:34:32.772 17:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:32.772 17:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:32.772 17:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:34:32.772 17:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:32.772 17:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:32.772 17:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:32.772 17:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:32.772 17:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:32.772 17:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:32.772 17:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:32.772 17:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:32.772 17:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.772 17:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:32.772 17:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:34:32.772 17:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:32.772 "name": "Existed_Raid", 00:34:32.772 "uuid": "78a5e9e4-c7c7-4dfb-abb1-fe08ebc8d10e", 00:34:32.772 "strip_size_kb": 64, 00:34:32.772 "state": "configuring", 00:34:32.772 "raid_level": "concat", 00:34:32.772 "superblock": true, 00:34:32.772 "num_base_bdevs": 3, 00:34:32.772 "num_base_bdevs_discovered": 2, 00:34:32.772 "num_base_bdevs_operational": 3, 00:34:32.772 "base_bdevs_list": [ 00:34:32.772 { 00:34:32.772 "name": null, 00:34:32.772 "uuid": "fe98c76d-826c-489a-9bbb-b952adf68e0d", 00:34:32.772 "is_configured": false, 00:34:32.772 "data_offset": 0, 00:34:32.772 "data_size": 63488 00:34:32.772 }, 00:34:32.772 { 00:34:32.772 "name": "BaseBdev2", 00:34:32.772 "uuid": "24c060ba-2a26-464d-9ecf-d22d9b19be2f", 00:34:32.772 "is_configured": true, 00:34:32.772 "data_offset": 2048, 00:34:32.772 "data_size": 63488 00:34:32.772 }, 00:34:32.772 { 00:34:32.772 "name": "BaseBdev3", 00:34:32.772 "uuid": "fa45b043-aa46-455d-8149-205df16677dc", 00:34:32.772 "is_configured": true, 00:34:32.772 "data_offset": 2048, 00:34:32.772 "data_size": 63488 00:34:32.772 } 00:34:32.772 ] 00:34:32.772 }' 00:34:32.772 17:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:32.772 17:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:33.340 17:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:34:33.340 17:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:33.340 17:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.340 17:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:33.340 17:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:34:33.340 17:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:34:33.340 17:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:33.340 17:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:34:33.340 17:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.340 17:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:33.340 17:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.340 17:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u fe98c76d-826c-489a-9bbb-b952adf68e0d 00:34:33.340 17:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.340 17:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:33.340 [2024-11-26 17:31:33.932165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:34:33.340 [2024-11-26 17:31:33.932447] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:34:33.340 [2024-11-26 17:31:33.932465] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:34:33.340 NewBaseBdev 00:34:33.340 [2024-11-26 17:31:33.932807] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:34:33.340 [2024-11-26 17:31:33.933022] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:34:33.340 [2024-11-26 17:31:33.933034] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:34:33.340 [2024-11-26 17:31:33.933200] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:33.340 17:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.340 17:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:34:33.340 17:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:34:33.340 17:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:33.340 17:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:34:33.340 17:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:33.340 17:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:33.340 17:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:33.340 17:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.340 17:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:33.340 17:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.340 17:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:34:33.340 17:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.340 17:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:33.340 [ 00:34:33.340 { 00:34:33.340 "name": "NewBaseBdev", 00:34:33.340 "aliases": [ 00:34:33.340 "fe98c76d-826c-489a-9bbb-b952adf68e0d" 00:34:33.340 ], 00:34:33.340 "product_name": "Malloc disk", 00:34:33.340 "block_size": 512, 00:34:33.340 "num_blocks": 65536, 00:34:33.340 "uuid": 
"fe98c76d-826c-489a-9bbb-b952adf68e0d", 00:34:33.340 "assigned_rate_limits": { 00:34:33.340 "rw_ios_per_sec": 0, 00:34:33.340 "rw_mbytes_per_sec": 0, 00:34:33.340 "r_mbytes_per_sec": 0, 00:34:33.340 "w_mbytes_per_sec": 0 00:34:33.340 }, 00:34:33.340 "claimed": true, 00:34:33.340 "claim_type": "exclusive_write", 00:34:33.340 "zoned": false, 00:34:33.340 "supported_io_types": { 00:34:33.340 "read": true, 00:34:33.340 "write": true, 00:34:33.340 "unmap": true, 00:34:33.340 "flush": true, 00:34:33.340 "reset": true, 00:34:33.340 "nvme_admin": false, 00:34:33.340 "nvme_io": false, 00:34:33.340 "nvme_io_md": false, 00:34:33.340 "write_zeroes": true, 00:34:33.340 "zcopy": true, 00:34:33.340 "get_zone_info": false, 00:34:33.340 "zone_management": false, 00:34:33.340 "zone_append": false, 00:34:33.340 "compare": false, 00:34:33.340 "compare_and_write": false, 00:34:33.340 "abort": true, 00:34:33.340 "seek_hole": false, 00:34:33.340 "seek_data": false, 00:34:33.340 "copy": true, 00:34:33.340 "nvme_iov_md": false 00:34:33.340 }, 00:34:33.340 "memory_domains": [ 00:34:33.340 { 00:34:33.340 "dma_device_id": "system", 00:34:33.340 "dma_device_type": 1 00:34:33.340 }, 00:34:33.340 { 00:34:33.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:33.340 "dma_device_type": 2 00:34:33.340 } 00:34:33.340 ], 00:34:33.340 "driver_specific": {} 00:34:33.340 } 00:34:33.340 ] 00:34:33.340 17:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.340 17:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:34:33.340 17:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:34:33.340 17:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:33.340 17:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:33.340 17:31:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:34:33.340 17:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:33.340 17:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:33.340 17:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:33.340 17:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:33.340 17:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:33.340 17:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:33.340 17:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:33.340 17:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:33.340 17:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.341 17:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:33.341 17:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.341 17:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:33.341 "name": "Existed_Raid", 00:34:33.341 "uuid": "78a5e9e4-c7c7-4dfb-abb1-fe08ebc8d10e", 00:34:33.341 "strip_size_kb": 64, 00:34:33.341 "state": "online", 00:34:33.341 "raid_level": "concat", 00:34:33.341 "superblock": true, 00:34:33.341 "num_base_bdevs": 3, 00:34:33.341 "num_base_bdevs_discovered": 3, 00:34:33.341 "num_base_bdevs_operational": 3, 00:34:33.341 "base_bdevs_list": [ 00:34:33.341 { 00:34:33.341 "name": "NewBaseBdev", 00:34:33.341 "uuid": "fe98c76d-826c-489a-9bbb-b952adf68e0d", 00:34:33.341 "is_configured": 
true, 00:34:33.341 "data_offset": 2048, 00:34:33.341 "data_size": 63488 00:34:33.341 }, 00:34:33.341 { 00:34:33.341 "name": "BaseBdev2", 00:34:33.341 "uuid": "24c060ba-2a26-464d-9ecf-d22d9b19be2f", 00:34:33.341 "is_configured": true, 00:34:33.341 "data_offset": 2048, 00:34:33.341 "data_size": 63488 00:34:33.341 }, 00:34:33.341 { 00:34:33.341 "name": "BaseBdev3", 00:34:33.341 "uuid": "fa45b043-aa46-455d-8149-205df16677dc", 00:34:33.341 "is_configured": true, 00:34:33.341 "data_offset": 2048, 00:34:33.341 "data_size": 63488 00:34:33.341 } 00:34:33.341 ] 00:34:33.341 }' 00:34:33.341 17:31:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:33.341 17:31:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:33.910 17:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:34:33.910 17:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:34:33.910 17:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:34:33.910 17:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:34:33.910 17:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:34:33.910 17:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:34:33.910 17:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:34:33.910 17:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:34:33.910 17:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.910 17:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:33.910 [2024-11-26 17:31:34.379920] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:33.910 17:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.910 17:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:33.910 "name": "Existed_Raid", 00:34:33.910 "aliases": [ 00:34:33.910 "78a5e9e4-c7c7-4dfb-abb1-fe08ebc8d10e" 00:34:33.910 ], 00:34:33.910 "product_name": "Raid Volume", 00:34:33.910 "block_size": 512, 00:34:33.910 "num_blocks": 190464, 00:34:33.910 "uuid": "78a5e9e4-c7c7-4dfb-abb1-fe08ebc8d10e", 00:34:33.910 "assigned_rate_limits": { 00:34:33.910 "rw_ios_per_sec": 0, 00:34:33.910 "rw_mbytes_per_sec": 0, 00:34:33.910 "r_mbytes_per_sec": 0, 00:34:33.910 "w_mbytes_per_sec": 0 00:34:33.910 }, 00:34:33.910 "claimed": false, 00:34:33.910 "zoned": false, 00:34:33.910 "supported_io_types": { 00:34:33.910 "read": true, 00:34:33.910 "write": true, 00:34:33.910 "unmap": true, 00:34:33.910 "flush": true, 00:34:33.910 "reset": true, 00:34:33.910 "nvme_admin": false, 00:34:33.910 "nvme_io": false, 00:34:33.910 "nvme_io_md": false, 00:34:33.910 "write_zeroes": true, 00:34:33.910 "zcopy": false, 00:34:33.910 "get_zone_info": false, 00:34:33.910 "zone_management": false, 00:34:33.910 "zone_append": false, 00:34:33.910 "compare": false, 00:34:33.910 "compare_and_write": false, 00:34:33.911 "abort": false, 00:34:33.911 "seek_hole": false, 00:34:33.911 "seek_data": false, 00:34:33.911 "copy": false, 00:34:33.911 "nvme_iov_md": false 00:34:33.911 }, 00:34:33.911 "memory_domains": [ 00:34:33.911 { 00:34:33.911 "dma_device_id": "system", 00:34:33.911 "dma_device_type": 1 00:34:33.911 }, 00:34:33.911 { 00:34:33.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:33.911 "dma_device_type": 2 00:34:33.911 }, 00:34:33.911 { 00:34:33.911 "dma_device_id": "system", 00:34:33.911 "dma_device_type": 1 00:34:33.911 }, 00:34:33.911 { 00:34:33.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:33.911 
"dma_device_type": 2 00:34:33.911 }, 00:34:33.911 { 00:34:33.911 "dma_device_id": "system", 00:34:33.911 "dma_device_type": 1 00:34:33.911 }, 00:34:33.911 { 00:34:33.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:33.911 "dma_device_type": 2 00:34:33.911 } 00:34:33.911 ], 00:34:33.911 "driver_specific": { 00:34:33.911 "raid": { 00:34:33.911 "uuid": "78a5e9e4-c7c7-4dfb-abb1-fe08ebc8d10e", 00:34:33.911 "strip_size_kb": 64, 00:34:33.911 "state": "online", 00:34:33.911 "raid_level": "concat", 00:34:33.911 "superblock": true, 00:34:33.911 "num_base_bdevs": 3, 00:34:33.911 "num_base_bdevs_discovered": 3, 00:34:33.911 "num_base_bdevs_operational": 3, 00:34:33.911 "base_bdevs_list": [ 00:34:33.911 { 00:34:33.911 "name": "NewBaseBdev", 00:34:33.911 "uuid": "fe98c76d-826c-489a-9bbb-b952adf68e0d", 00:34:33.911 "is_configured": true, 00:34:33.911 "data_offset": 2048, 00:34:33.911 "data_size": 63488 00:34:33.911 }, 00:34:33.911 { 00:34:33.911 "name": "BaseBdev2", 00:34:33.911 "uuid": "24c060ba-2a26-464d-9ecf-d22d9b19be2f", 00:34:33.911 "is_configured": true, 00:34:33.911 "data_offset": 2048, 00:34:33.911 "data_size": 63488 00:34:33.911 }, 00:34:33.911 { 00:34:33.911 "name": "BaseBdev3", 00:34:33.911 "uuid": "fa45b043-aa46-455d-8149-205df16677dc", 00:34:33.911 "is_configured": true, 00:34:33.911 "data_offset": 2048, 00:34:33.911 "data_size": 63488 00:34:33.911 } 00:34:33.911 ] 00:34:33.911 } 00:34:33.911 } 00:34:33.911 }' 00:34:33.911 17:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:33.911 17:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:34:33.911 BaseBdev2 00:34:33.911 BaseBdev3' 00:34:33.911 17:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:33.911 17:31:34 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:34:33.911 17:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:33.911 17:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:33.911 17:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:34:33.911 17:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.911 17:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:33.911 17:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.911 17:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:33.911 17:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:33.911 17:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:33.911 17:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:34:33.911 17:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:33.911 17:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.911 17:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:33.911 17:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.171 17:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:34.171 17:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:34.171 
17:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:34.171 17:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:34:34.171 17:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.171 17:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:34.171 17:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:34.171 17:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.171 17:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:34.171 17:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:34.171 17:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:34:34.171 17:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.171 17:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:34.171 [2024-11-26 17:31:34.671065] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:34.171 [2024-11-26 17:31:34.671150] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:34.171 [2024-11-26 17:31:34.671274] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:34.171 [2024-11-26 17:31:34.671364] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:34.171 [2024-11-26 17:31:34.671432] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:34:34.171 17:31:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.171 17:31:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66471 00:34:34.171 17:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66471 ']' 00:34:34.171 17:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66471 00:34:34.171 17:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:34:34.171 17:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:34.171 17:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66471 00:34:34.171 killing process with pid 66471 00:34:34.171 17:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:34.171 17:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:34.171 17:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66471' 00:34:34.171 17:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66471 00:34:34.171 17:31:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66471 00:34:34.171 [2024-11-26 17:31:34.710395] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:34.431 [2024-11-26 17:31:35.064198] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:35.809 17:31:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:34:35.809 00:34:35.809 real 0m10.442s 00:34:35.809 user 0m16.440s 00:34:35.809 sys 0m1.676s 00:34:35.809 17:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:35.809 ************************************ 00:34:35.809 
END TEST raid_state_function_test_sb 00:34:35.809 ************************************ 00:34:35.809 17:31:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:35.809 17:31:36 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:34:35.809 17:31:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:35.809 17:31:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:35.809 17:31:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:35.809 ************************************ 00:34:35.809 START TEST raid_superblock_test 00:34:35.809 ************************************ 00:34:35.809 17:31:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:34:35.809 17:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:34:35.809 17:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:34:35.809 17:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:34:35.809 17:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:34:35.809 17:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:34:35.809 17:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:34:35.809 17:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:34:35.809 17:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:34:35.809 17:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:34:35.809 17:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:34:35.809 17:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:34:35.809 17:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:34:35.809 17:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:34:35.809 17:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:34:35.809 17:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:34:35.809 17:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:34:35.809 17:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:34:35.809 17:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=67091 00:34:35.809 17:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 67091 00:34:35.809 17:31:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 67091 ']' 00:34:35.809 17:31:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:35.809 17:31:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:35.809 17:31:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:35.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:35.809 17:31:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:35.809 17:31:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:35.809 [2024-11-26 17:31:36.417192] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:34:35.809 [2024-11-26 17:31:36.417443] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67091 ]
00:34:36.069 [2024-11-26 17:31:36.586294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:36.069 [2024-11-26 17:31:36.722628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:34:36.329 [2024-11-26 17:31:36.957739] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:34:36.329 [2024-11-26 17:31:36.957810] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:34:36.900 malloc1
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:34:36.900 [2024-11-26 17:31:37.361747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:34:36.900 [2024-11-26 17:31:37.361827] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:34:36.900 [2024-11-26 17:31:37.361853] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:34:36.900 [2024-11-26 17:31:37.361865] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:34:36.900 [2024-11-26 17:31:37.364427] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:34:36.900 [2024-11-26 17:31:37.364476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:34:36.900 pt1
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:34:36.900 malloc2
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:34:36.900 [2024-11-26 17:31:37.425230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:34:36.900 [2024-11-26 17:31:37.425412] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:34:36.900 [2024-11-26 17:31:37.425470] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:34:36.900 [2024-11-26 17:31:37.425538] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:34:36.900 [2024-11-26 17:31:37.428079] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:34:36.900 [2024-11-26 17:31:37.428182] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:34:36.900 pt2
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:34:36.900 malloc3
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:34:36.900 [2024-11-26 17:31:37.499295] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:34:36.900 [2024-11-26 17:31:37.499437] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:34:36.900 [2024-11-26 17:31:37.499488] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:34:36.900 [2024-11-26 17:31:37.499563] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:34:36.900 [2024-11-26 17:31:37.502149] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:34:36.900 [2024-11-26 17:31:37.502252] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:34:36.900 pt3
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:34:36.900 [2024-11-26 17:31:37.511395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:34:36.900 [2024-11-26 17:31:37.513636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:34:36.900 [2024-11-26 17:31:37.513772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:34:36.900 [2024-11-26 17:31:37.513999] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:34:36.900 [2024-11-26 17:31:37.514016] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:34:36.900 [2024-11-26 17:31:37.514335] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:34:36.900 [2024-11-26 17:31:37.514586] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:34:36.900 [2024-11-26 17:31:37.514600] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:34:36.900 [2024-11-26 17:31:37.514819] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:34:36.900 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:34:36.901 17:31:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:36.901 17:31:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:34:36.901 17:31:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:36.901 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:34:36.901 "name": "raid_bdev1",
00:34:36.901 "uuid": "879796f9-e4df-4d68-a1ff-854c92f93fa0",
00:34:36.901 "strip_size_kb": 64,
00:34:36.901 "state": "online",
00:34:36.901 "raid_level": "concat",
00:34:36.901 "superblock": true,
00:34:36.901 "num_base_bdevs": 3,
00:34:36.901 "num_base_bdevs_discovered": 3,
00:34:36.901 "num_base_bdevs_operational": 3,
00:34:36.901 "base_bdevs_list": [
00:34:36.901 {
00:34:36.901 "name": "pt1",
00:34:36.901 "uuid": "00000000-0000-0000-0000-000000000001",
00:34:36.901 "is_configured": true,
00:34:36.901 "data_offset": 2048,
00:34:36.901 "data_size": 63488
00:34:36.901 },
00:34:36.901 {
00:34:36.901 "name": "pt2",
00:34:36.901 "uuid": "00000000-0000-0000-0000-000000000002",
00:34:36.901 "is_configured": true,
00:34:36.901 "data_offset": 2048,
00:34:36.901 "data_size": 63488
00:34:36.901 },
00:34:36.901 {
00:34:36.901 "name": "pt3",
00:34:36.901 "uuid": "00000000-0000-0000-0000-000000000003",
00:34:36.901 "is_configured": true,
00:34:36.901 "data_offset": 2048,
00:34:36.901 "data_size": 63488
00:34:36.901 }
00:34:36.901 ]
00:34:36.901 }'
00:34:36.901 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:34:36.901 17:31:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:34:37.468 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:34:37.468 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:34:37.468 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:34:37.468 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:34:37.468 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:34:37.468 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:34:37.468 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:34:37.468 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:34:37.468 17:31:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:37.468 17:31:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:34:37.468 [2024-11-26 17:31:37.926972] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:34:37.468 17:31:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:37.468 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:34:37.468 "name": "raid_bdev1",
00:34:37.468 "aliases": [
00:34:37.468 "879796f9-e4df-4d68-a1ff-854c92f93fa0"
00:34:37.468 ],
00:34:37.468 "product_name": "Raid Volume",
00:34:37.468 "block_size": 512,
00:34:37.468 "num_blocks": 190464,
00:34:37.468 "uuid": "879796f9-e4df-4d68-a1ff-854c92f93fa0",
00:34:37.468 "assigned_rate_limits": {
00:34:37.468 "rw_ios_per_sec": 0,
00:34:37.468 "rw_mbytes_per_sec": 0,
00:34:37.468 "r_mbytes_per_sec": 0,
00:34:37.468 "w_mbytes_per_sec": 0
00:34:37.468 },
00:34:37.468 "claimed": false,
00:34:37.468 "zoned": false,
00:34:37.468 "supported_io_types": {
00:34:37.468 "read": true,
00:34:37.468 "write": true,
00:34:37.468 "unmap": true,
00:34:37.468 "flush": true,
00:34:37.468 "reset": true,
00:34:37.469 "nvme_admin": false,
00:34:37.469 "nvme_io": false,
00:34:37.469 "nvme_io_md": false,
00:34:37.469 "write_zeroes": true,
00:34:37.469 "zcopy": false,
00:34:37.469 "get_zone_info": false,
00:34:37.469 "zone_management": false,
00:34:37.469 "zone_append": false,
00:34:37.469 "compare": false,
00:34:37.469 "compare_and_write": false,
00:34:37.469 "abort": false,
00:34:37.469 "seek_hole": false,
00:34:37.469 "seek_data": false,
00:34:37.469 "copy": false,
00:34:37.469 "nvme_iov_md": false
00:34:37.469 },
00:34:37.469 "memory_domains": [
00:34:37.469 {
00:34:37.469 "dma_device_id": "system",
00:34:37.469 "dma_device_type": 1
00:34:37.469 },
00:34:37.469 {
00:34:37.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:34:37.469 "dma_device_type": 2
00:34:37.469 },
00:34:37.469 {
00:34:37.469 "dma_device_id": "system",
00:34:37.469 "dma_device_type": 1
00:34:37.469 },
00:34:37.469 {
00:34:37.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:34:37.469 "dma_device_type": 2
00:34:37.469 },
00:34:37.469 {
00:34:37.469 "dma_device_id": "system",
00:34:37.469 "dma_device_type": 1
00:34:37.469 },
00:34:37.469 {
00:34:37.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:34:37.469 "dma_device_type": 2
00:34:37.469 }
00:34:37.469 ],
00:34:37.469 "driver_specific": {
00:34:37.469 "raid": {
00:34:37.469 "uuid": "879796f9-e4df-4d68-a1ff-854c92f93fa0",
00:34:37.469 "strip_size_kb": 64,
00:34:37.469 "state": "online",
00:34:37.469 "raid_level": "concat",
00:34:37.469 "superblock": true,
00:34:37.469 "num_base_bdevs": 3,
00:34:37.469 "num_base_bdevs_discovered": 3,
00:34:37.469 "num_base_bdevs_operational": 3,
00:34:37.469 "base_bdevs_list": [
00:34:37.469 {
00:34:37.469 "name": "pt1",
00:34:37.469 "uuid": "00000000-0000-0000-0000-000000000001",
00:34:37.469 "is_configured": true,
00:34:37.469 "data_offset": 2048,
00:34:37.469 "data_size": 63488
00:34:37.469 },
00:34:37.469 {
00:34:37.469 "name": "pt2",
00:34:37.469 "uuid": "00000000-0000-0000-0000-000000000002",
00:34:37.469 "is_configured": true,
00:34:37.469 "data_offset": 2048,
00:34:37.469 "data_size": 63488
00:34:37.469 },
00:34:37.469 {
00:34:37.469 "name": "pt3",
00:34:37.469 "uuid": "00000000-0000-0000-0000-000000000003",
00:34:37.469 "is_configured": true,
00:34:37.469 "data_offset": 2048,
00:34:37.469 "data_size": 63488
00:34:37.469 }
00:34:37.469 ]
00:34:37.469 }
00:34:37.469 }
00:34:37.469 }'
00:34:37.469 17:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:34:37.469 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:34:37.469 pt2
00:34:37.469 pt3'
00:34:37.469 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:34:37.469 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:34:37.469 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:34:37.469 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:34:37.469 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:34:37.469 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:37.469 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:34:37.469 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:37.469 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:34:37.469 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:34:37.469 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:34:37.469 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:34:37.469 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:34:37.469 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:37.469 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:34:37.469 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:37.469 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:34:37.469 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:34:37.469 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:34:37.469 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:34:37.469 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:37.469 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:34:37.469 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:34:37.469 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:37.728 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:34:37.728 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:34:37.728 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:34:37.728 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:37.728 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:34:37.728 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:34:37.728 [2024-11-26 17:31:38.202474] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:34:37.728 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:37.728 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=879796f9-e4df-4d68-a1ff-854c92f93fa0
00:34:37.728 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 879796f9-e4df-4d68-a1ff-854c92f93fa0 ']'
00:34:37.728 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:34:37.728 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:37.728 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:34:37.728 [2024-11-26 17:31:38.230103] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:34:37.728 [2024-11-26 17:31:38.230140] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:34:37.728 [2024-11-26 17:31:38.230235] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:34:37.728 [2024-11-26 17:31:38.230305] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:34:37.728 [2024-11-26 17:31:38.230317] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:34:37.728 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:37.728 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:34:37.728 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:37.728 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:34:37.728 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:34:37.728 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:37.728 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:34:37.728 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:34:37.728 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:34:37.728 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:34:37.728 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:37.728 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:34:37.728 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:37.729 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:34:37.729 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:34:37.729 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:37.729 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:34:37.729 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:37.729 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:34:37.729 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:34:37.729 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:37.729 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:34:37.729 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:37.729 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:34:37.729 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:34:37.729 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:37.729 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:34:37.729 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:37.729 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:34:37.729 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:34:37.729 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:34:37.729 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:34:37.729 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:34:37.729 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:34:37.729 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:34:37.729 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:34:37.729 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:34:37.729 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:37.729 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:34:37.729 [2024-11-26 17:31:38.369932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:34:37.729 [2024-11-26 17:31:38.372032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:34:37.729 [2024-11-26 17:31:38.372097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:34:37.729 [2024-11-26 17:31:38.372155] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:34:37.729 [2024-11-26 17:31:38.372217] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:34:37.729 [2024-11-26 17:31:38.372241] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:34:37.729 [2024-11-26 17:31:38.372261] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:34:37.729 [2024-11-26 17:31:38.372272] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:34:37.729 request:
00:34:37.729 {
00:34:37.729 "name": "raid_bdev1",
00:34:37.729 "raid_level": "concat",
00:34:37.729 "base_bdevs": [
00:34:37.729 "malloc1",
00:34:37.729 "malloc2",
00:34:37.729 "malloc3"
00:34:37.729 ],
00:34:37.729 "strip_size_kb": 64,
00:34:37.729 "superblock": false,
00:34:37.729 "method": "bdev_raid_create",
00:34:37.729 "req_id": 1
00:34:37.729 }
00:34:37.729 Got JSON-RPC error response
00:34:37.729 response:
00:34:37.729 {
00:34:37.729 "code": -17,
00:34:37.729 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:34:37.729 }
00:34:37.729 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:34:37.729 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:34:37.729 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:34:37.729 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:34:37.729 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:34:37.729 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:34:37.729 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:34:37.729 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:37.729 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:34:37.729 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:37.987 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:34:37.987 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:34:37.987 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:34:37.987 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:37.987 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:34:37.987 [2024-11-26 17:31:38.441762] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:34:37.987 [2024-11-26 17:31:38.441925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:34:37.987 [2024-11-26 17:31:38.441984] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:34:37.987 [2024-11-26 17:31:38.442023] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:34:37.987 [2024-11-26 17:31:38.444624] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:34:37.987 [2024-11-26 17:31:38.444709] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:34:37.987 [2024-11-26 17:31:38.444832] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:34:37.987 [2024-11-26 17:31:38.444924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:34:37.987 pt1
00:34:37.987 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:37.987 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3
00:34:37.987 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:34:37.987 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:34:37.987 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:34:37.987 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:34:37.987 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:34:37.987 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:34:37.987 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:34:37.987 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:34:37.987 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:34:37.987 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:34:37.987 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:34:37.987 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:37.987 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:34:37.987 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:37.987 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:34:37.987 "name": "raid_bdev1",
00:34:37.987 "uuid": "879796f9-e4df-4d68-a1ff-854c92f93fa0",
00:34:37.987 "strip_size_kb": 64,
00:34:37.987 "state": "configuring",
00:34:37.987 "raid_level": "concat",
00:34:37.987 "superblock": true,
00:34:37.987 "num_base_bdevs": 3,
00:34:37.987 "num_base_bdevs_discovered": 1,
00:34:37.987 "num_base_bdevs_operational": 3,
00:34:37.987 "base_bdevs_list": [
00:34:37.987 {
00:34:37.987 "name": "pt1",
00:34:37.987 "uuid": "00000000-0000-0000-0000-000000000001",
00:34:37.987 "is_configured": true,
00:34:37.987 "data_offset": 2048,
00:34:37.987 "data_size": 63488
00:34:37.987 },
00:34:37.987 {
00:34:37.987 "name": null,
00:34:37.987 "uuid": "00000000-0000-0000-0000-000000000002",
00:34:37.987 "is_configured": false,
00:34:37.987 "data_offset": 2048,
00:34:37.987 "data_size": 63488
00:34:37.987 },
00:34:37.987 {
00:34:37.987 "name": null,
00:34:37.987 "uuid": "00000000-0000-0000-0000-000000000003",
00:34:37.987 "is_configured": false,
00:34:37.987 "data_offset": 2048,
00:34:37.987 "data_size": 63488
00:34:37.987 }
00:34:37.987 ]
00:34:37.987 }'
00:34:37.987 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:34:37.987 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:34:38.246 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']'
00:34:38.246 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:34:38.246 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:38.246 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:34:38.246 [2024-11-26 17:31:38.821108] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:34:38.246 [2024-11-26 17:31:38.821288] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:34:38.246 [2024-11-26 17:31:38.821325] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:34:38.246 [2024-11-26 17:31:38.821336] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:34:38.246 [2024-11-26 17:31:38.821844] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:34:38.246 [2024-11-26 17:31:38.821865] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:34:38.246 [2024-11-26 17:31:38.821960] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:34:38.246 [2024-11-26 17:31:38.821993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:34:38.246 pt2
00:34:38.246 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:38.246 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:34:38.246 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:38.246 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:34:38.246 [2024-11-26 17:31:38.833109] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:34:38.246 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:38.246 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3
00:34:38.246 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:34:38.246 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:34:38.246 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:34:38.246 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:34:38.246 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107
-- # local num_base_bdevs_operational=3 00:34:38.246 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:38.246 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:38.246 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:38.246 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:38.246 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:38.246 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:38.246 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.246 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:38.246 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.246 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:38.246 "name": "raid_bdev1", 00:34:38.246 "uuid": "879796f9-e4df-4d68-a1ff-854c92f93fa0", 00:34:38.246 "strip_size_kb": 64, 00:34:38.246 "state": "configuring", 00:34:38.246 "raid_level": "concat", 00:34:38.246 "superblock": true, 00:34:38.246 "num_base_bdevs": 3, 00:34:38.246 "num_base_bdevs_discovered": 1, 00:34:38.246 "num_base_bdevs_operational": 3, 00:34:38.246 "base_bdevs_list": [ 00:34:38.246 { 00:34:38.246 "name": "pt1", 00:34:38.246 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:38.246 "is_configured": true, 00:34:38.246 "data_offset": 2048, 00:34:38.246 "data_size": 63488 00:34:38.246 }, 00:34:38.246 { 00:34:38.246 "name": null, 00:34:38.246 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:38.246 "is_configured": false, 00:34:38.246 "data_offset": 0, 00:34:38.246 "data_size": 63488 00:34:38.246 }, 00:34:38.246 { 00:34:38.246 "name": null, 00:34:38.246 
"uuid": "00000000-0000-0000-0000-000000000003", 00:34:38.246 "is_configured": false, 00:34:38.246 "data_offset": 2048, 00:34:38.246 "data_size": 63488 00:34:38.246 } 00:34:38.246 ] 00:34:38.246 }' 00:34:38.246 17:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:38.246 17:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:38.814 17:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:34:38.814 17:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:34:38.814 17:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:38.814 17:31:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.814 17:31:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:38.814 [2024-11-26 17:31:39.332270] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:38.814 [2024-11-26 17:31:39.332422] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:38.814 [2024-11-26 17:31:39.332473] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:34:38.814 [2024-11-26 17:31:39.332523] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:38.814 [2024-11-26 17:31:39.333088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:38.814 [2024-11-26 17:31:39.333159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:38.814 [2024-11-26 17:31:39.333312] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:34:38.814 [2024-11-26 17:31:39.333373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:38.814 pt2 00:34:38.814 17:31:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.814 17:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:34:38.814 17:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:34:38.814 17:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:34:38.814 17:31:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.814 17:31:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:38.814 [2024-11-26 17:31:39.344235] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:34:38.814 [2024-11-26 17:31:39.344344] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:38.814 [2024-11-26 17:31:39.344390] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:34:38.814 [2024-11-26 17:31:39.344427] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:38.814 [2024-11-26 17:31:39.344986] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:38.814 [2024-11-26 17:31:39.345061] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:34:38.814 [2024-11-26 17:31:39.345177] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:34:38.814 [2024-11-26 17:31:39.345238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:34:38.814 [2024-11-26 17:31:39.345426] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:34:38.814 [2024-11-26 17:31:39.345473] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:34:38.814 [2024-11-26 17:31:39.345809] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:34:38.814 [2024-11-26 17:31:39.346028] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:34:38.814 [2024-11-26 17:31:39.346042] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:34:38.814 [2024-11-26 17:31:39.346210] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:38.814 pt3 00:34:38.814 17:31:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.814 17:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:34:38.814 17:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:34:38.814 17:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:34:38.814 17:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:38.814 17:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:38.814 17:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:34:38.814 17:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:38.814 17:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:38.814 17:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:38.814 17:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:38.814 17:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:38.814 17:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:38.814 17:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:38.814 17:31:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:38.814 17:31:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.814 17:31:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:38.814 17:31:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.814 17:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:38.814 "name": "raid_bdev1", 00:34:38.814 "uuid": "879796f9-e4df-4d68-a1ff-854c92f93fa0", 00:34:38.814 "strip_size_kb": 64, 00:34:38.814 "state": "online", 00:34:38.814 "raid_level": "concat", 00:34:38.814 "superblock": true, 00:34:38.814 "num_base_bdevs": 3, 00:34:38.814 "num_base_bdevs_discovered": 3, 00:34:38.814 "num_base_bdevs_operational": 3, 00:34:38.814 "base_bdevs_list": [ 00:34:38.814 { 00:34:38.814 "name": "pt1", 00:34:38.814 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:38.814 "is_configured": true, 00:34:38.814 "data_offset": 2048, 00:34:38.814 "data_size": 63488 00:34:38.814 }, 00:34:38.814 { 00:34:38.814 "name": "pt2", 00:34:38.814 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:38.814 "is_configured": true, 00:34:38.814 "data_offset": 2048, 00:34:38.814 "data_size": 63488 00:34:38.814 }, 00:34:38.814 { 00:34:38.814 "name": "pt3", 00:34:38.814 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:38.814 "is_configured": true, 00:34:38.814 "data_offset": 2048, 00:34:38.814 "data_size": 63488 00:34:38.814 } 00:34:38.814 ] 00:34:38.814 }' 00:34:38.814 17:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:38.814 17:31:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:39.382 17:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:34:39.382 17:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:34:39.382 17:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:34:39.382 17:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:34:39.382 17:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:34:39.382 17:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:34:39.382 17:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:39.382 17:31:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.382 17:31:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:39.382 17:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:34:39.382 [2024-11-26 17:31:39.800059] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:39.382 17:31:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.382 17:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:39.382 "name": "raid_bdev1", 00:34:39.382 "aliases": [ 00:34:39.382 "879796f9-e4df-4d68-a1ff-854c92f93fa0" 00:34:39.382 ], 00:34:39.382 "product_name": "Raid Volume", 00:34:39.382 "block_size": 512, 00:34:39.382 "num_blocks": 190464, 00:34:39.382 "uuid": "879796f9-e4df-4d68-a1ff-854c92f93fa0", 00:34:39.382 "assigned_rate_limits": { 00:34:39.382 "rw_ios_per_sec": 0, 00:34:39.382 "rw_mbytes_per_sec": 0, 00:34:39.382 "r_mbytes_per_sec": 0, 00:34:39.382 "w_mbytes_per_sec": 0 00:34:39.382 }, 00:34:39.382 "claimed": false, 00:34:39.382 "zoned": false, 00:34:39.382 "supported_io_types": { 00:34:39.382 "read": true, 00:34:39.382 "write": true, 00:34:39.382 "unmap": true, 00:34:39.382 "flush": true, 00:34:39.382 "reset": true, 00:34:39.382 "nvme_admin": false, 00:34:39.382 "nvme_io": false, 
00:34:39.382 "nvme_io_md": false, 00:34:39.382 "write_zeroes": true, 00:34:39.382 "zcopy": false, 00:34:39.382 "get_zone_info": false, 00:34:39.382 "zone_management": false, 00:34:39.382 "zone_append": false, 00:34:39.382 "compare": false, 00:34:39.382 "compare_and_write": false, 00:34:39.382 "abort": false, 00:34:39.382 "seek_hole": false, 00:34:39.382 "seek_data": false, 00:34:39.382 "copy": false, 00:34:39.382 "nvme_iov_md": false 00:34:39.382 }, 00:34:39.382 "memory_domains": [ 00:34:39.382 { 00:34:39.382 "dma_device_id": "system", 00:34:39.382 "dma_device_type": 1 00:34:39.382 }, 00:34:39.382 { 00:34:39.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:39.382 "dma_device_type": 2 00:34:39.382 }, 00:34:39.382 { 00:34:39.382 "dma_device_id": "system", 00:34:39.382 "dma_device_type": 1 00:34:39.382 }, 00:34:39.382 { 00:34:39.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:39.382 "dma_device_type": 2 00:34:39.382 }, 00:34:39.382 { 00:34:39.382 "dma_device_id": "system", 00:34:39.382 "dma_device_type": 1 00:34:39.382 }, 00:34:39.382 { 00:34:39.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:39.382 "dma_device_type": 2 00:34:39.382 } 00:34:39.382 ], 00:34:39.382 "driver_specific": { 00:34:39.382 "raid": { 00:34:39.382 "uuid": "879796f9-e4df-4d68-a1ff-854c92f93fa0", 00:34:39.382 "strip_size_kb": 64, 00:34:39.382 "state": "online", 00:34:39.383 "raid_level": "concat", 00:34:39.383 "superblock": true, 00:34:39.383 "num_base_bdevs": 3, 00:34:39.383 "num_base_bdevs_discovered": 3, 00:34:39.383 "num_base_bdevs_operational": 3, 00:34:39.383 "base_bdevs_list": [ 00:34:39.383 { 00:34:39.383 "name": "pt1", 00:34:39.383 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:39.383 "is_configured": true, 00:34:39.383 "data_offset": 2048, 00:34:39.383 "data_size": 63488 00:34:39.383 }, 00:34:39.383 { 00:34:39.383 "name": "pt2", 00:34:39.383 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:39.383 "is_configured": true, 00:34:39.383 "data_offset": 2048, 00:34:39.383 
"data_size": 63488 00:34:39.383 }, 00:34:39.383 { 00:34:39.383 "name": "pt3", 00:34:39.383 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:39.383 "is_configured": true, 00:34:39.383 "data_offset": 2048, 00:34:39.383 "data_size": 63488 00:34:39.383 } 00:34:39.383 ] 00:34:39.383 } 00:34:39.383 } 00:34:39.383 }' 00:34:39.383 17:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:39.383 17:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:34:39.383 pt2 00:34:39.383 pt3' 00:34:39.383 17:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:39.383 17:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:34:39.383 17:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:39.383 17:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:34:39.383 17:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:39.383 17:31:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.383 17:31:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:39.383 17:31:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.383 17:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:39.383 17:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:39.383 17:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:39.383 17:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:34:39.383 17:31:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.383 17:31:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:39.383 17:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:39.383 17:31:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.383 17:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:39.383 17:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:39.383 17:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:39.383 17:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:39.383 17:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:34:39.383 17:31:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.383 17:31:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:39.383 17:31:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.383 17:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:39.383 17:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:39.383 17:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:39.383 17:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:34:39.383 17:31:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.383 17:31:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:34:39.383 [2024-11-26 17:31:40.071541] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:39.642 17:31:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.642 17:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 879796f9-e4df-4d68-a1ff-854c92f93fa0 '!=' 879796f9-e4df-4d68-a1ff-854c92f93fa0 ']' 00:34:39.642 17:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:34:39.642 17:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:34:39.642 17:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:34:39.642 17:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 67091 00:34:39.642 17:31:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 67091 ']' 00:34:39.642 17:31:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 67091 00:34:39.642 17:31:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:34:39.642 17:31:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:39.642 17:31:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67091 00:34:39.642 17:31:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:39.642 17:31:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:39.642 17:31:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67091' 00:34:39.642 killing process with pid 67091 00:34:39.642 17:31:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 67091 00:34:39.642 [2024-11-26 17:31:40.142139] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:34:39.642 17:31:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 67091 00:34:39.642 [2024-11-26 17:31:40.142353] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:39.642 [2024-11-26 17:31:40.142424] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:39.642 [2024-11-26 17:31:40.142438] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:34:39.901 [2024-11-26 17:31:40.495281] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:41.299 17:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:34:41.299 00:34:41.299 real 0m5.391s 00:34:41.299 user 0m7.682s 00:34:41.299 sys 0m0.881s 00:34:41.299 17:31:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:41.299 17:31:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:41.299 ************************************ 00:34:41.299 END TEST raid_superblock_test 00:34:41.299 ************************************ 00:34:41.299 17:31:41 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:34:41.299 17:31:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:34:41.299 17:31:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:41.299 17:31:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:41.299 ************************************ 00:34:41.299 START TEST raid_read_error_test 00:34:41.299 ************************************ 00:34:41.299 17:31:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:34:41.299 17:31:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:34:41.299 17:31:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:34:41.299 17:31:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:34:41.299 17:31:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:34:41.299 17:31:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:34:41.299 17:31:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:34:41.299 17:31:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:34:41.299 17:31:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:34:41.299 17:31:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:34:41.299 17:31:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:34:41.299 17:31:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:34:41.299 17:31:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:34:41.299 17:31:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:34:41.299 17:31:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:34:41.299 17:31:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:34:41.299 17:31:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:34:41.299 17:31:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:34:41.299 17:31:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:34:41.299 17:31:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:34:41.299 17:31:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:34:41.299 17:31:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:34:41.299 17:31:41 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:34:41.299 17:31:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:34:41.299 17:31:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:34:41.300 17:31:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:34:41.300 17:31:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.TznlYBD1bB 00:34:41.300 17:31:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67344 00:34:41.300 17:31:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67344 00:34:41.300 17:31:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:34:41.300 17:31:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67344 ']' 00:34:41.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:41.300 17:31:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:41.300 17:31:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:41.300 17:31:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:41.300 17:31:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:41.300 17:31:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:41.300 [2024-11-26 17:31:41.867842] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:34:41.300 [2024-11-26 17:31:41.868605] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67344 ] 00:34:41.558 [2024-11-26 17:31:42.048348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:41.558 [2024-11-26 17:31:42.185387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:41.818 [2024-11-26 17:31:42.432541] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:41.818 [2024-11-26 17:31:42.432630] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:42.079 17:31:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:42.079 17:31:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:34:42.079 17:31:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:34:42.079 17:31:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:34:42.079 17:31:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.079 17:31:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:42.340 BaseBdev1_malloc 00:34:42.340 17:31:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.340 17:31:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:34:42.340 17:31:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.340 17:31:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:42.340 true 00:34:42.340 17:31:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:34:42.340 17:31:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:34:42.340 17:31:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.340 17:31:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:42.340 [2024-11-26 17:31:42.825347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:34:42.340 [2024-11-26 17:31:42.825442] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:42.340 [2024-11-26 17:31:42.825470] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:34:42.340 [2024-11-26 17:31:42.825484] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:42.340 [2024-11-26 17:31:42.828023] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:42.340 [2024-11-26 17:31:42.828143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:34:42.340 BaseBdev1 00:34:42.340 17:31:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.340 17:31:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:34:42.340 17:31:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:34:42.340 17:31:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.340 17:31:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:42.340 BaseBdev2_malloc 00:34:42.340 17:31:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.340 17:31:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:34:42.340 17:31:42 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.340 17:31:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:42.340 true 00:34:42.340 17:31:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.340 17:31:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:34:42.341 17:31:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.341 17:31:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:42.341 [2024-11-26 17:31:42.895563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:34:42.341 [2024-11-26 17:31:42.895632] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:42.341 [2024-11-26 17:31:42.895654] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:34:42.341 [2024-11-26 17:31:42.895668] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:42.341 [2024-11-26 17:31:42.898067] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:42.341 [2024-11-26 17:31:42.898121] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:34:42.341 BaseBdev2 00:34:42.341 17:31:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.341 17:31:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:34:42.341 17:31:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:34:42.341 17:31:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.341 17:31:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:42.341 BaseBdev3_malloc 00:34:42.341 17:31:42 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.341 17:31:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:34:42.341 17:31:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.341 17:31:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:42.341 true 00:34:42.341 17:31:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.341 17:31:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:34:42.341 17:31:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.341 17:31:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:42.341 [2024-11-26 17:31:42.980514] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:34:42.341 [2024-11-26 17:31:42.980601] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:42.341 [2024-11-26 17:31:42.980627] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:34:42.341 [2024-11-26 17:31:42.980643] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:42.341 [2024-11-26 17:31:42.983163] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:42.341 [2024-11-26 17:31:42.983272] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:34:42.341 BaseBdev3 00:34:42.341 17:31:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.341 17:31:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:34:42.341 17:31:42 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.341 17:31:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:42.341 [2024-11-26 17:31:42.992599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:42.341 [2024-11-26 17:31:42.994754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:42.341 [2024-11-26 17:31:42.994850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:42.341 [2024-11-26 17:31:42.995095] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:34:42.341 [2024-11-26 17:31:42.995112] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:34:42.341 [2024-11-26 17:31:42.995431] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:34:42.341 [2024-11-26 17:31:42.995646] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:34:42.341 [2024-11-26 17:31:42.995665] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:34:42.341 [2024-11-26 17:31:42.995899] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:42.341 17:31:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.341 17:31:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:34:42.341 17:31:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:42.341 17:31:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:42.341 17:31:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:34:42.341 17:31:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:42.341 17:31:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:42.341 17:31:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:42.341 17:31:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:42.341 17:31:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:42.341 17:31:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:42.341 17:31:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:42.341 17:31:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:42.341 17:31:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.341 17:31:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:42.341 17:31:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.602 17:31:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:42.602 "name": "raid_bdev1", 00:34:42.602 "uuid": "778cb525-f052-40c9-9416-9c7450c37053", 00:34:42.602 "strip_size_kb": 64, 00:34:42.602 "state": "online", 00:34:42.602 "raid_level": "concat", 00:34:42.602 "superblock": true, 00:34:42.602 "num_base_bdevs": 3, 00:34:42.602 "num_base_bdevs_discovered": 3, 00:34:42.602 "num_base_bdevs_operational": 3, 00:34:42.602 "base_bdevs_list": [ 00:34:42.602 { 00:34:42.602 "name": "BaseBdev1", 00:34:42.602 "uuid": "7d21b376-068e-59e3-9773-f8316b2e0513", 00:34:42.602 "is_configured": true, 00:34:42.602 "data_offset": 2048, 00:34:42.602 "data_size": 63488 00:34:42.602 }, 00:34:42.602 { 00:34:42.602 "name": "BaseBdev2", 00:34:42.602 "uuid": "db851f0e-6e10-5124-9e29-11102389a713", 00:34:42.602 "is_configured": true, 00:34:42.602 "data_offset": 2048, 00:34:42.602 "data_size": 63488 
00:34:42.602 }, 00:34:42.602 { 00:34:42.602 "name": "BaseBdev3", 00:34:42.602 "uuid": "00e7d640-5dad-5dc9-b06f-eae8142db281", 00:34:42.602 "is_configured": true, 00:34:42.602 "data_offset": 2048, 00:34:42.602 "data_size": 63488 00:34:42.602 } 00:34:42.602 ] 00:34:42.602 }' 00:34:42.602 17:31:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:42.602 17:31:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:42.861 17:31:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:34:42.862 17:31:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:34:43.121 [2024-11-26 17:31:43.601330] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:34:44.061 17:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:34:44.061 17:31:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.061 17:31:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:44.061 17:31:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.061 17:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:34:44.061 17:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:34:44.061 17:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:34:44.061 17:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:34:44.061 17:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:44.061 17:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:34:44.061 17:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:34:44.061 17:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:44.061 17:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:44.061 17:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:44.061 17:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:44.061 17:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:44.061 17:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:44.061 17:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:44.061 17:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:44.061 17:31:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.061 17:31:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:44.061 17:31:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.061 17:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:44.061 "name": "raid_bdev1", 00:34:44.061 "uuid": "778cb525-f052-40c9-9416-9c7450c37053", 00:34:44.061 "strip_size_kb": 64, 00:34:44.061 "state": "online", 00:34:44.061 "raid_level": "concat", 00:34:44.061 "superblock": true, 00:34:44.061 "num_base_bdevs": 3, 00:34:44.061 "num_base_bdevs_discovered": 3, 00:34:44.061 "num_base_bdevs_operational": 3, 00:34:44.061 "base_bdevs_list": [ 00:34:44.061 { 00:34:44.061 "name": "BaseBdev1", 00:34:44.061 "uuid": "7d21b376-068e-59e3-9773-f8316b2e0513", 00:34:44.061 "is_configured": true, 00:34:44.061 "data_offset": 2048, 00:34:44.061 "data_size": 63488 
00:34:44.061 }, 00:34:44.061 { 00:34:44.061 "name": "BaseBdev2", 00:34:44.061 "uuid": "db851f0e-6e10-5124-9e29-11102389a713", 00:34:44.061 "is_configured": true, 00:34:44.061 "data_offset": 2048, 00:34:44.061 "data_size": 63488 00:34:44.061 }, 00:34:44.061 { 00:34:44.061 "name": "BaseBdev3", 00:34:44.061 "uuid": "00e7d640-5dad-5dc9-b06f-eae8142db281", 00:34:44.061 "is_configured": true, 00:34:44.061 "data_offset": 2048, 00:34:44.061 "data_size": 63488 00:34:44.061 } 00:34:44.061 ] 00:34:44.061 }' 00:34:44.061 17:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:44.061 17:31:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:44.632 17:31:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:34:44.632 17:31:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.632 17:31:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:44.632 [2024-11-26 17:31:45.027113] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:44.632 [2024-11-26 17:31:45.027215] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:44.632 [2024-11-26 17:31:45.030599] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:44.632 [2024-11-26 17:31:45.030711] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:44.632 [2024-11-26 17:31:45.030797] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:44.632 [2024-11-26 17:31:45.030859] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:34:44.632 { 00:34:44.632 "results": [ 00:34:44.632 { 00:34:44.632 "job": "raid_bdev1", 00:34:44.632 "core_mask": "0x1", 00:34:44.632 "workload": "randrw", 00:34:44.632 "percentage": 50, 
00:34:44.632 "status": "finished", 00:34:44.632 "queue_depth": 1, 00:34:44.632 "io_size": 131072, 00:34:44.632 "runtime": 1.426578, 00:34:44.632 "iops": 12995.4338283641, 00:34:44.632 "mibps": 1624.4292285455124, 00:34:44.632 "io_failed": 1, 00:34:44.632 "io_timeout": 0, 00:34:44.632 "avg_latency_us": 106.11972583767894, 00:34:44.632 "min_latency_us": 28.841921397379913, 00:34:44.632 "max_latency_us": 1752.8733624454148 00:34:44.632 } 00:34:44.632 ], 00:34:44.632 "core_count": 1 00:34:44.632 } 00:34:44.632 17:31:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.632 17:31:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67344 00:34:44.632 17:31:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67344 ']' 00:34:44.632 17:31:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67344 00:34:44.632 17:31:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:34:44.632 17:31:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:44.632 17:31:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67344 00:34:44.632 killing process with pid 67344 00:34:44.632 17:31:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:44.632 17:31:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:44.632 17:31:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67344' 00:34:44.632 17:31:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67344 00:34:44.632 [2024-11-26 17:31:45.077112] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:44.632 17:31:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67344 00:34:44.892 [2024-11-26 
17:31:45.355634] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:46.273 17:31:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.TznlYBD1bB 00:34:46.273 17:31:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:34:46.273 17:31:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:34:46.273 ************************************ 00:34:46.273 END TEST raid_read_error_test 00:34:46.273 ************************************ 00:34:46.273 17:31:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:34:46.273 17:31:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:34:46.273 17:31:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:34:46.273 17:31:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:34:46.273 17:31:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:34:46.273 00:34:46.273 real 0m5.010s 00:34:46.273 user 0m6.003s 00:34:46.273 sys 0m0.602s 00:34:46.273 17:31:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:46.273 17:31:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:46.273 17:31:46 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:34:46.273 17:31:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:34:46.273 17:31:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:46.273 17:31:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:46.273 ************************************ 00:34:46.273 START TEST raid_write_error_test 00:34:46.273 ************************************ 00:34:46.273 17:31:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:34:46.273 17:31:46 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:34:46.273 17:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:34:46.273 17:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:34:46.273 17:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:34:46.273 17:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:34:46.273 17:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:34:46.273 17:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:34:46.274 17:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:34:46.274 17:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:34:46.274 17:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:34:46.274 17:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:34:46.274 17:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:34:46.274 17:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:34:46.274 17:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:34:46.274 17:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:34:46.274 17:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:34:46.274 17:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:34:46.274 17:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:34:46.274 17:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:34:46.274 17:31:46 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:34:46.274 17:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:34:46.274 17:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:34:46.274 17:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:34:46.274 17:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:34:46.274 17:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:34:46.274 17:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.mLedYW7hLT 00:34:46.274 17:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67494 00:34:46.274 17:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:34:46.274 17:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67494 00:34:46.274 17:31:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67494 ']' 00:34:46.274 17:31:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:46.274 17:31:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:46.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:46.274 17:31:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:34:46.274 17:31:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:46.274 17:31:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:46.534 [2024-11-26 17:31:46.971834] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:34:46.534 [2024-11-26 17:31:46.971973] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67494 ] 00:34:46.534 [2024-11-26 17:31:47.132875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:46.793 [2024-11-26 17:31:47.266671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:47.053 [2024-11-26 17:31:47.517909] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:47.053 [2024-11-26 17:31:47.517998] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:47.312 17:31:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:47.312 17:31:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:34:47.312 17:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:34:47.312 17:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:34:47.312 17:31:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.312 17:31:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:47.312 BaseBdev1_malloc 00:34:47.312 17:31:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.312 17:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:34:47.312 17:31:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.312 17:31:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:47.312 true 00:34:47.312 17:31:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.312 17:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:34:47.312 17:31:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.312 17:31:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:47.312 [2024-11-26 17:31:47.980411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:34:47.312 [2024-11-26 17:31:47.980560] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:47.312 [2024-11-26 17:31:47.980593] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:34:47.313 [2024-11-26 17:31:47.980609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:47.313 [2024-11-26 17:31:47.983234] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:47.313 [2024-11-26 17:31:47.983284] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:34:47.313 BaseBdev1 00:34:47.313 17:31:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.313 17:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:34:47.313 17:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:34:47.313 17:31:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.313 17:31:47 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:34:47.573 BaseBdev2_malloc 00:34:47.573 17:31:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.573 17:31:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:34:47.573 17:31:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.573 17:31:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:47.573 true 00:34:47.573 17:31:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.573 17:31:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:34:47.573 17:31:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.573 17:31:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:47.573 [2024-11-26 17:31:48.055340] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:34:47.573 [2024-11-26 17:31:48.055404] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:47.573 [2024-11-26 17:31:48.055423] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:34:47.573 [2024-11-26 17:31:48.055435] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:47.573 [2024-11-26 17:31:48.057898] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:47.573 [2024-11-26 17:31:48.057942] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:34:47.573 BaseBdev2 00:34:47.573 17:31:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.573 17:31:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:34:47.573 17:31:48 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:34:47.573 17:31:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.573 17:31:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:47.573 BaseBdev3_malloc 00:34:47.573 17:31:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.573 17:31:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:34:47.573 17:31:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.573 17:31:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:47.573 true 00:34:47.573 17:31:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.573 17:31:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:34:47.573 17:31:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.573 17:31:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:47.573 [2024-11-26 17:31:48.143527] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:34:47.573 [2024-11-26 17:31:48.143591] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:47.573 [2024-11-26 17:31:48.143613] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:34:47.573 [2024-11-26 17:31:48.143625] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:47.573 [2024-11-26 17:31:48.146148] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:47.573 [2024-11-26 17:31:48.146193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:34:47.573 BaseBdev3 00:34:47.573 17:31:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.573 17:31:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:34:47.573 17:31:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.573 17:31:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:47.573 [2024-11-26 17:31:48.155603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:47.573 [2024-11-26 17:31:48.157775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:47.573 [2024-11-26 17:31:48.157939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:47.573 [2024-11-26 17:31:48.158205] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:34:47.573 [2024-11-26 17:31:48.158222] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:34:47.573 [2024-11-26 17:31:48.158613] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:34:47.573 [2024-11-26 17:31:48.158855] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:34:47.574 [2024-11-26 17:31:48.158884] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:34:47.574 [2024-11-26 17:31:48.159088] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:47.574 17:31:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.574 17:31:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:34:47.574 17:31:48 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:47.574 17:31:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:47.574 17:31:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:34:47.574 17:31:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:47.574 17:31:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:47.574 17:31:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:47.574 17:31:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:47.574 17:31:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:47.574 17:31:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:47.574 17:31:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:47.574 17:31:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:47.574 17:31:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.574 17:31:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:47.574 17:31:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.574 17:31:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:47.574 "name": "raid_bdev1", 00:34:47.574 "uuid": "f3b12abd-830d-4711-9997-58f868011863", 00:34:47.574 "strip_size_kb": 64, 00:34:47.574 "state": "online", 00:34:47.574 "raid_level": "concat", 00:34:47.574 "superblock": true, 00:34:47.574 "num_base_bdevs": 3, 00:34:47.574 "num_base_bdevs_discovered": 3, 00:34:47.574 "num_base_bdevs_operational": 3, 00:34:47.574 "base_bdevs_list": [ 00:34:47.574 { 00:34:47.574 
"name": "BaseBdev1", 00:34:47.574 "uuid": "08494ab0-a6b8-5854-a24c-64a60a90d439", 00:34:47.574 "is_configured": true, 00:34:47.574 "data_offset": 2048, 00:34:47.574 "data_size": 63488 00:34:47.574 }, 00:34:47.574 { 00:34:47.574 "name": "BaseBdev2", 00:34:47.574 "uuid": "42405a11-a3aa-5085-bdcb-15e629a10809", 00:34:47.574 "is_configured": true, 00:34:47.574 "data_offset": 2048, 00:34:47.574 "data_size": 63488 00:34:47.574 }, 00:34:47.574 { 00:34:47.574 "name": "BaseBdev3", 00:34:47.574 "uuid": "ae8e434a-dfae-5686-80a3-65da6eab7457", 00:34:47.574 "is_configured": true, 00:34:47.574 "data_offset": 2048, 00:34:47.574 "data_size": 63488 00:34:47.574 } 00:34:47.574 ] 00:34:47.574 }' 00:34:47.574 17:31:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:47.574 17:31:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:48.143 17:31:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:34:48.143 17:31:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:34:48.143 [2024-11-26 17:31:48.740244] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:34:49.080 17:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:34:49.080 17:31:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.080 17:31:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:49.080 17:31:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.080 17:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:34:49.080 17:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:34:49.080 17:31:49 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:34:49.080 17:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:34:49.080 17:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:49.080 17:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:49.080 17:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:34:49.080 17:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:49.080 17:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:49.080 17:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:49.080 17:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:49.080 17:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:49.080 17:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:49.080 17:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:49.080 17:31:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.080 17:31:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:49.080 17:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:49.080 17:31:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.080 17:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:49.080 "name": "raid_bdev1", 00:34:49.080 "uuid": "f3b12abd-830d-4711-9997-58f868011863", 00:34:49.080 "strip_size_kb": 64, 00:34:49.080 "state": "online", 
00:34:49.080 "raid_level": "concat", 00:34:49.080 "superblock": true, 00:34:49.080 "num_base_bdevs": 3, 00:34:49.080 "num_base_bdevs_discovered": 3, 00:34:49.080 "num_base_bdevs_operational": 3, 00:34:49.080 "base_bdevs_list": [ 00:34:49.080 { 00:34:49.080 "name": "BaseBdev1", 00:34:49.080 "uuid": "08494ab0-a6b8-5854-a24c-64a60a90d439", 00:34:49.080 "is_configured": true, 00:34:49.080 "data_offset": 2048, 00:34:49.080 "data_size": 63488 00:34:49.080 }, 00:34:49.080 { 00:34:49.080 "name": "BaseBdev2", 00:34:49.080 "uuid": "42405a11-a3aa-5085-bdcb-15e629a10809", 00:34:49.080 "is_configured": true, 00:34:49.080 "data_offset": 2048, 00:34:49.080 "data_size": 63488 00:34:49.080 }, 00:34:49.080 { 00:34:49.080 "name": "BaseBdev3", 00:34:49.080 "uuid": "ae8e434a-dfae-5686-80a3-65da6eab7457", 00:34:49.080 "is_configured": true, 00:34:49.080 "data_offset": 2048, 00:34:49.080 "data_size": 63488 00:34:49.080 } 00:34:49.080 ] 00:34:49.080 }' 00:34:49.080 17:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:49.080 17:31:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:49.651 17:31:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:34:49.651 17:31:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.651 17:31:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:49.651 [2024-11-26 17:31:50.132964] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:49.651 [2024-11-26 17:31:50.133102] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:49.651 [2024-11-26 17:31:50.136364] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:49.651 [2024-11-26 17:31:50.136470] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:49.651 [2024-11-26 17:31:50.136534] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:49.651 [2024-11-26 17:31:50.136548] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:34:49.651 { 00:34:49.651 "results": [ 00:34:49.651 { 00:34:49.651 "job": "raid_bdev1", 00:34:49.651 "core_mask": "0x1", 00:34:49.651 "workload": "randrw", 00:34:49.651 "percentage": 50, 00:34:49.651 "status": "finished", 00:34:49.651 "queue_depth": 1, 00:34:49.651 "io_size": 131072, 00:34:49.651 "runtime": 1.393507, 00:34:49.651 "iops": 13571.514172515817, 00:34:49.651 "mibps": 1696.4392715644772, 00:34:49.651 "io_failed": 1, 00:34:49.651 "io_timeout": 0, 00:34:49.651 "avg_latency_us": 101.69545690367546, 00:34:49.651 "min_latency_us": 27.165065502183406, 00:34:49.651 "max_latency_us": 1724.2550218340612 00:34:49.651 } 00:34:49.651 ], 00:34:49.651 "core_count": 1 00:34:49.651 } 00:34:49.651 17:31:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.651 17:31:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67494 00:34:49.651 17:31:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67494 ']' 00:34:49.651 17:31:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67494 00:34:49.651 17:31:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:34:49.651 17:31:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:49.651 17:31:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67494 00:34:49.651 killing process with pid 67494 00:34:49.651 17:31:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:49.651 17:31:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:49.651 
17:31:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67494' 00:34:49.651 17:31:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67494 00:34:49.651 17:31:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67494 00:34:49.651 [2024-11-26 17:31:50.170855] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:49.910 [2024-11-26 17:31:50.442708] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:51.290 17:31:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:34:51.290 17:31:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.mLedYW7hLT 00:34:51.290 17:31:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:34:51.290 17:31:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:34:51.290 17:31:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:34:51.290 17:31:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:34:51.290 17:31:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:34:51.290 17:31:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:34:51.290 00:34:51.290 real 0m4.970s 00:34:51.290 user 0m5.955s 00:34:51.290 sys 0m0.592s 00:34:51.290 ************************************ 00:34:51.290 END TEST raid_write_error_test 00:34:51.290 ************************************ 00:34:51.290 17:31:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:51.290 17:31:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:34:51.290 17:31:51 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:34:51.290 17:31:51 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:34:51.290 17:31:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:34:51.290 17:31:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:51.290 17:31:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:51.290 ************************************ 00:34:51.290 START TEST raid_state_function_test 00:34:51.290 ************************************ 00:34:51.290 17:31:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:34:51.290 17:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:34:51.290 17:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:34:51.290 17:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:34:51.290 17:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:34:51.290 17:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:34:51.290 17:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:51.290 17:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:34:51.290 17:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:34:51.290 17:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:51.290 17:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:34:51.290 17:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:34:51.290 17:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:51.290 17:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:34:51.290 17:31:51 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:34:51.290 17:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:51.290 17:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:34:51.290 17:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:34:51.290 17:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:34:51.290 17:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:34:51.290 17:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:34:51.290 17:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:34:51.290 17:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:34:51.290 17:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:34:51.290 17:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:34:51.290 17:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:34:51.290 17:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67639 00:34:51.290 17:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:34:51.290 Process raid pid: 67639 00:34:51.290 17:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67639' 00:34:51.290 17:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67639 00:34:51.290 17:31:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67639 ']' 00:34:51.290 17:31:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:51.290 17:31:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:51.290 17:31:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:51.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:51.290 17:31:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:51.290 17:31:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:51.550 [2024-11-26 17:31:51.998852] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:34:51.550 [2024-11-26 17:31:51.999076] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:51.550 [2024-11-26 17:31:52.180721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:51.810 [2024-11-26 17:31:52.307140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:52.069 [2024-11-26 17:31:52.540115] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:52.069 [2024-11-26 17:31:52.540234] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:52.330 17:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:52.330 17:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:34:52.330 17:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:34:52.330 17:31:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.330 17:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:52.330 [2024-11-26 17:31:52.897718] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:52.330 [2024-11-26 17:31:52.897838] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:52.330 [2024-11-26 17:31:52.897853] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:52.330 [2024-11-26 17:31:52.897881] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:52.330 [2024-11-26 17:31:52.897888] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:52.330 [2024-11-26 17:31:52.897897] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:52.330 17:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.330 17:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:34:52.330 17:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:52.330 17:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:52.330 17:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:52.330 17:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:52.330 17:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:52.330 17:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:52.330 17:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:52.330 
17:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:52.330 17:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:52.330 17:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:52.330 17:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:52.330 17:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.330 17:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:52.330 17:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.330 17:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:52.330 "name": "Existed_Raid", 00:34:52.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:52.330 "strip_size_kb": 0, 00:34:52.330 "state": "configuring", 00:34:52.330 "raid_level": "raid1", 00:34:52.330 "superblock": false, 00:34:52.330 "num_base_bdevs": 3, 00:34:52.330 "num_base_bdevs_discovered": 0, 00:34:52.330 "num_base_bdevs_operational": 3, 00:34:52.330 "base_bdevs_list": [ 00:34:52.330 { 00:34:52.330 "name": "BaseBdev1", 00:34:52.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:52.330 "is_configured": false, 00:34:52.330 "data_offset": 0, 00:34:52.330 "data_size": 0 00:34:52.330 }, 00:34:52.330 { 00:34:52.330 "name": "BaseBdev2", 00:34:52.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:52.330 "is_configured": false, 00:34:52.330 "data_offset": 0, 00:34:52.330 "data_size": 0 00:34:52.330 }, 00:34:52.330 { 00:34:52.330 "name": "BaseBdev3", 00:34:52.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:52.330 "is_configured": false, 00:34:52.330 "data_offset": 0, 00:34:52.330 "data_size": 0 00:34:52.330 } 00:34:52.330 ] 00:34:52.330 }' 00:34:52.330 17:31:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:52.330 17:31:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:52.900 17:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:34:52.900 17:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.900 17:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:52.900 [2024-11-26 17:31:53.356876] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:52.900 [2024-11-26 17:31:53.356971] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:34:52.900 17:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.900 17:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:34:52.900 17:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.900 17:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:52.900 [2024-11-26 17:31:53.368857] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:52.900 [2024-11-26 17:31:53.368948] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:52.900 [2024-11-26 17:31:53.368980] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:52.900 [2024-11-26 17:31:53.369006] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:52.900 [2024-11-26 17:31:53.369027] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:52.900 [2024-11-26 17:31:53.369051] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:52.900 17:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.900 17:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:34:52.900 17:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.900 17:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:52.900 [2024-11-26 17:31:53.419450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:52.900 BaseBdev1 00:34:52.900 17:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.900 17:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:34:52.900 17:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:34:52.900 17:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:52.900 17:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:34:52.900 17:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:52.900 17:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:52.900 17:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:52.900 17:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.900 17:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:52.900 17:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.900 17:31:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:34:52.900 17:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.900 17:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:52.900 [ 00:34:52.900 { 00:34:52.900 "name": "BaseBdev1", 00:34:52.900 "aliases": [ 00:34:52.900 "4bddb173-8206-4d59-bf10-532ec0705451" 00:34:52.900 ], 00:34:52.900 "product_name": "Malloc disk", 00:34:52.900 "block_size": 512, 00:34:52.900 "num_blocks": 65536, 00:34:52.900 "uuid": "4bddb173-8206-4d59-bf10-532ec0705451", 00:34:52.900 "assigned_rate_limits": { 00:34:52.900 "rw_ios_per_sec": 0, 00:34:52.900 "rw_mbytes_per_sec": 0, 00:34:52.900 "r_mbytes_per_sec": 0, 00:34:52.900 "w_mbytes_per_sec": 0 00:34:52.900 }, 00:34:52.900 "claimed": true, 00:34:52.900 "claim_type": "exclusive_write", 00:34:52.900 "zoned": false, 00:34:52.900 "supported_io_types": { 00:34:52.900 "read": true, 00:34:52.900 "write": true, 00:34:52.900 "unmap": true, 00:34:52.900 "flush": true, 00:34:52.900 "reset": true, 00:34:52.900 "nvme_admin": false, 00:34:52.900 "nvme_io": false, 00:34:52.900 "nvme_io_md": false, 00:34:52.900 "write_zeroes": true, 00:34:52.900 "zcopy": true, 00:34:52.900 "get_zone_info": false, 00:34:52.900 "zone_management": false, 00:34:52.900 "zone_append": false, 00:34:52.900 "compare": false, 00:34:52.900 "compare_and_write": false, 00:34:52.900 "abort": true, 00:34:52.900 "seek_hole": false, 00:34:52.900 "seek_data": false, 00:34:52.900 "copy": true, 00:34:52.900 "nvme_iov_md": false 00:34:52.900 }, 00:34:52.900 "memory_domains": [ 00:34:52.900 { 00:34:52.900 "dma_device_id": "system", 00:34:52.900 "dma_device_type": 1 00:34:52.900 }, 00:34:52.900 { 00:34:52.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:52.900 "dma_device_type": 2 00:34:52.900 } 00:34:52.900 ], 00:34:52.900 "driver_specific": {} 00:34:52.900 } 00:34:52.900 ] 00:34:52.900 17:31:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.900 17:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:34:52.900 17:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:34:52.900 17:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:52.900 17:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:52.900 17:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:52.900 17:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:52.900 17:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:52.900 17:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:52.900 17:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:52.900 17:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:52.900 17:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:52.900 17:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:52.900 17:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:52.900 17:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.900 17:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:52.900 17:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.900 17:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:34:52.900 "name": "Existed_Raid", 00:34:52.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:52.900 "strip_size_kb": 0, 00:34:52.900 "state": "configuring", 00:34:52.900 "raid_level": "raid1", 00:34:52.900 "superblock": false, 00:34:52.900 "num_base_bdevs": 3, 00:34:52.900 "num_base_bdevs_discovered": 1, 00:34:52.900 "num_base_bdevs_operational": 3, 00:34:52.900 "base_bdevs_list": [ 00:34:52.900 { 00:34:52.900 "name": "BaseBdev1", 00:34:52.900 "uuid": "4bddb173-8206-4d59-bf10-532ec0705451", 00:34:52.900 "is_configured": true, 00:34:52.900 "data_offset": 0, 00:34:52.900 "data_size": 65536 00:34:52.900 }, 00:34:52.900 { 00:34:52.900 "name": "BaseBdev2", 00:34:52.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:52.900 "is_configured": false, 00:34:52.900 "data_offset": 0, 00:34:52.900 "data_size": 0 00:34:52.900 }, 00:34:52.900 { 00:34:52.900 "name": "BaseBdev3", 00:34:52.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:52.900 "is_configured": false, 00:34:52.900 "data_offset": 0, 00:34:52.900 "data_size": 0 00:34:52.900 } 00:34:52.900 ] 00:34:52.901 }' 00:34:52.901 17:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:52.901 17:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:53.471 17:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:34:53.471 17:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.471 17:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:53.471 [2024-11-26 17:31:53.926685] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:53.471 [2024-11-26 17:31:53.926749] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:34:53.471 17:31:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.471 17:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:34:53.471 17:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.471 17:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:53.471 [2024-11-26 17:31:53.938707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:53.471 [2024-11-26 17:31:53.940799] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:53.471 [2024-11-26 17:31:53.940850] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:53.471 [2024-11-26 17:31:53.940862] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:53.471 [2024-11-26 17:31:53.940872] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:53.471 17:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.471 17:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:34:53.471 17:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:34:53.471 17:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:34:53.471 17:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:53.471 17:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:53.471 17:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:53.471 17:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:34:53.471 17:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:53.471 17:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:53.471 17:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:53.471 17:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:53.471 17:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:53.471 17:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:53.471 17:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:53.471 17:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.471 17:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:53.471 17:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.471 17:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:53.471 "name": "Existed_Raid", 00:34:53.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:53.472 "strip_size_kb": 0, 00:34:53.472 "state": "configuring", 00:34:53.472 "raid_level": "raid1", 00:34:53.472 "superblock": false, 00:34:53.472 "num_base_bdevs": 3, 00:34:53.472 "num_base_bdevs_discovered": 1, 00:34:53.472 "num_base_bdevs_operational": 3, 00:34:53.472 "base_bdevs_list": [ 00:34:53.472 { 00:34:53.472 "name": "BaseBdev1", 00:34:53.472 "uuid": "4bddb173-8206-4d59-bf10-532ec0705451", 00:34:53.472 "is_configured": true, 00:34:53.472 "data_offset": 0, 00:34:53.472 "data_size": 65536 00:34:53.472 }, 00:34:53.472 { 00:34:53.472 "name": "BaseBdev2", 00:34:53.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:53.472 
"is_configured": false, 00:34:53.472 "data_offset": 0, 00:34:53.472 "data_size": 0 00:34:53.472 }, 00:34:53.472 { 00:34:53.472 "name": "BaseBdev3", 00:34:53.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:53.472 "is_configured": false, 00:34:53.472 "data_offset": 0, 00:34:53.472 "data_size": 0 00:34:53.472 } 00:34:53.472 ] 00:34:53.472 }' 00:34:53.472 17:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:53.472 17:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:53.731 17:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:34:53.732 17:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.732 17:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:53.991 [2024-11-26 17:31:54.426662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:53.991 BaseBdev2 00:34:53.991 17:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.991 17:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:34:53.991 17:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:34:53.991 17:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:53.991 17:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:34:53.991 17:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:53.991 17:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:53.991 17:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:53.991 17:31:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.991 17:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:53.991 17:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.991 17:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:34:53.991 17:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.991 17:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:53.991 [ 00:34:53.991 { 00:34:53.991 "name": "BaseBdev2", 00:34:53.991 "aliases": [ 00:34:53.991 "ab495c15-776e-4877-8eed-c84ecb9c0730" 00:34:53.991 ], 00:34:53.991 "product_name": "Malloc disk", 00:34:53.991 "block_size": 512, 00:34:53.991 "num_blocks": 65536, 00:34:53.991 "uuid": "ab495c15-776e-4877-8eed-c84ecb9c0730", 00:34:53.991 "assigned_rate_limits": { 00:34:53.991 "rw_ios_per_sec": 0, 00:34:53.991 "rw_mbytes_per_sec": 0, 00:34:53.991 "r_mbytes_per_sec": 0, 00:34:53.991 "w_mbytes_per_sec": 0 00:34:53.991 }, 00:34:53.991 "claimed": true, 00:34:53.991 "claim_type": "exclusive_write", 00:34:53.991 "zoned": false, 00:34:53.991 "supported_io_types": { 00:34:53.991 "read": true, 00:34:53.991 "write": true, 00:34:53.991 "unmap": true, 00:34:53.991 "flush": true, 00:34:53.991 "reset": true, 00:34:53.991 "nvme_admin": false, 00:34:53.991 "nvme_io": false, 00:34:53.991 "nvme_io_md": false, 00:34:53.991 "write_zeroes": true, 00:34:53.991 "zcopy": true, 00:34:53.991 "get_zone_info": false, 00:34:53.991 "zone_management": false, 00:34:53.991 "zone_append": false, 00:34:53.991 "compare": false, 00:34:53.991 "compare_and_write": false, 00:34:53.991 "abort": true, 00:34:53.991 "seek_hole": false, 00:34:53.991 "seek_data": false, 00:34:53.991 "copy": true, 00:34:53.991 "nvme_iov_md": false 00:34:53.991 }, 00:34:53.991 
"memory_domains": [ 00:34:53.991 { 00:34:53.991 "dma_device_id": "system", 00:34:53.991 "dma_device_type": 1 00:34:53.991 }, 00:34:53.991 { 00:34:53.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:53.991 "dma_device_type": 2 00:34:53.991 } 00:34:53.991 ], 00:34:53.992 "driver_specific": {} 00:34:53.992 } 00:34:53.992 ] 00:34:53.992 17:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.992 17:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:34:53.992 17:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:34:53.992 17:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:34:53.992 17:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:34:53.992 17:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:53.992 17:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:53.992 17:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:53.992 17:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:53.992 17:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:53.992 17:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:53.992 17:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:53.992 17:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:53.992 17:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:53.992 17:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:34:53.992 17:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:53.992 17:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.992 17:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:53.992 17:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.992 17:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:53.992 "name": "Existed_Raid", 00:34:53.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:53.992 "strip_size_kb": 0, 00:34:53.992 "state": "configuring", 00:34:53.992 "raid_level": "raid1", 00:34:53.992 "superblock": false, 00:34:53.992 "num_base_bdevs": 3, 00:34:53.992 "num_base_bdevs_discovered": 2, 00:34:53.992 "num_base_bdevs_operational": 3, 00:34:53.992 "base_bdevs_list": [ 00:34:53.992 { 00:34:53.992 "name": "BaseBdev1", 00:34:53.992 "uuid": "4bddb173-8206-4d59-bf10-532ec0705451", 00:34:53.992 "is_configured": true, 00:34:53.992 "data_offset": 0, 00:34:53.992 "data_size": 65536 00:34:53.992 }, 00:34:53.992 { 00:34:53.992 "name": "BaseBdev2", 00:34:53.992 "uuid": "ab495c15-776e-4877-8eed-c84ecb9c0730", 00:34:53.992 "is_configured": true, 00:34:53.992 "data_offset": 0, 00:34:53.992 "data_size": 65536 00:34:53.992 }, 00:34:53.992 { 00:34:53.992 "name": "BaseBdev3", 00:34:53.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:53.992 "is_configured": false, 00:34:53.992 "data_offset": 0, 00:34:53.992 "data_size": 0 00:34:53.992 } 00:34:53.992 ] 00:34:53.992 }' 00:34:53.992 17:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:53.992 17:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:54.562 17:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:34:54.562 17:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.562 17:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:54.562 [2024-11-26 17:31:55.021090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:54.562 [2024-11-26 17:31:55.021149] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:34:54.562 [2024-11-26 17:31:55.021164] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:34:54.562 [2024-11-26 17:31:55.021476] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:34:54.562 [2024-11-26 17:31:55.021705] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:34:54.562 [2024-11-26 17:31:55.021717] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:34:54.562 [2024-11-26 17:31:55.022022] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:54.562 BaseBdev3 00:34:54.562 17:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.562 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:34:54.562 17:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:34:54.562 17:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:54.562 17:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:34:54.562 17:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:54.562 17:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:54.562 17:31:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:54.562 17:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.562 17:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:54.562 17:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.562 17:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:34:54.562 17:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.562 17:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:54.562 [ 00:34:54.562 { 00:34:54.562 "name": "BaseBdev3", 00:34:54.562 "aliases": [ 00:34:54.562 "14cfec44-3528-4e05-891f-9c43de109e93" 00:34:54.562 ], 00:34:54.562 "product_name": "Malloc disk", 00:34:54.562 "block_size": 512, 00:34:54.562 "num_blocks": 65536, 00:34:54.562 "uuid": "14cfec44-3528-4e05-891f-9c43de109e93", 00:34:54.562 "assigned_rate_limits": { 00:34:54.562 "rw_ios_per_sec": 0, 00:34:54.562 "rw_mbytes_per_sec": 0, 00:34:54.562 "r_mbytes_per_sec": 0, 00:34:54.562 "w_mbytes_per_sec": 0 00:34:54.562 }, 00:34:54.562 "claimed": true, 00:34:54.562 "claim_type": "exclusive_write", 00:34:54.562 "zoned": false, 00:34:54.562 "supported_io_types": { 00:34:54.562 "read": true, 00:34:54.562 "write": true, 00:34:54.562 "unmap": true, 00:34:54.562 "flush": true, 00:34:54.562 "reset": true, 00:34:54.562 "nvme_admin": false, 00:34:54.562 "nvme_io": false, 00:34:54.562 "nvme_io_md": false, 00:34:54.562 "write_zeroes": true, 00:34:54.562 "zcopy": true, 00:34:54.562 "get_zone_info": false, 00:34:54.562 "zone_management": false, 00:34:54.562 "zone_append": false, 00:34:54.562 "compare": false, 00:34:54.562 "compare_and_write": false, 00:34:54.562 "abort": true, 00:34:54.562 "seek_hole": false, 00:34:54.562 "seek_data": false, 00:34:54.562 
"copy": true, 00:34:54.562 "nvme_iov_md": false 00:34:54.562 }, 00:34:54.562 "memory_domains": [ 00:34:54.562 { 00:34:54.562 "dma_device_id": "system", 00:34:54.562 "dma_device_type": 1 00:34:54.562 }, 00:34:54.562 { 00:34:54.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:54.562 "dma_device_type": 2 00:34:54.562 } 00:34:54.562 ], 00:34:54.562 "driver_specific": {} 00:34:54.562 } 00:34:54.562 ] 00:34:54.562 17:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.562 17:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:34:54.562 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:34:54.562 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:34:54.562 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:34:54.562 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:54.562 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:54.562 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:54.562 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:54.562 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:54.562 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:54.562 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:54.562 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:54.562 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:54.562 17:31:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:54.562 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:54.562 17:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.562 17:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:54.562 17:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.562 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:54.562 "name": "Existed_Raid", 00:34:54.562 "uuid": "4afe0edb-e735-42c0-8e9d-c9bf0ae51245", 00:34:54.562 "strip_size_kb": 0, 00:34:54.562 "state": "online", 00:34:54.562 "raid_level": "raid1", 00:34:54.562 "superblock": false, 00:34:54.562 "num_base_bdevs": 3, 00:34:54.562 "num_base_bdevs_discovered": 3, 00:34:54.562 "num_base_bdevs_operational": 3, 00:34:54.562 "base_bdevs_list": [ 00:34:54.562 { 00:34:54.562 "name": "BaseBdev1", 00:34:54.562 "uuid": "4bddb173-8206-4d59-bf10-532ec0705451", 00:34:54.562 "is_configured": true, 00:34:54.562 "data_offset": 0, 00:34:54.562 "data_size": 65536 00:34:54.562 }, 00:34:54.562 { 00:34:54.563 "name": "BaseBdev2", 00:34:54.563 "uuid": "ab495c15-776e-4877-8eed-c84ecb9c0730", 00:34:54.563 "is_configured": true, 00:34:54.563 "data_offset": 0, 00:34:54.563 "data_size": 65536 00:34:54.563 }, 00:34:54.563 { 00:34:54.563 "name": "BaseBdev3", 00:34:54.563 "uuid": "14cfec44-3528-4e05-891f-9c43de109e93", 00:34:54.563 "is_configured": true, 00:34:54.563 "data_offset": 0, 00:34:54.563 "data_size": 65536 00:34:54.563 } 00:34:54.563 ] 00:34:54.563 }' 00:34:54.563 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:54.563 17:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:54.822 17:31:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:34:54.822 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:34:54.822 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:34:54.822 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:34:54.822 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:34:54.822 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:34:54.822 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:34:54.822 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:34:54.822 17:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.822 17:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:54.822 [2024-11-26 17:31:55.512744] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:55.083 17:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.083 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:55.083 "name": "Existed_Raid", 00:34:55.083 "aliases": [ 00:34:55.083 "4afe0edb-e735-42c0-8e9d-c9bf0ae51245" 00:34:55.083 ], 00:34:55.083 "product_name": "Raid Volume", 00:34:55.083 "block_size": 512, 00:34:55.083 "num_blocks": 65536, 00:34:55.083 "uuid": "4afe0edb-e735-42c0-8e9d-c9bf0ae51245", 00:34:55.083 "assigned_rate_limits": { 00:34:55.083 "rw_ios_per_sec": 0, 00:34:55.083 "rw_mbytes_per_sec": 0, 00:34:55.083 "r_mbytes_per_sec": 0, 00:34:55.083 "w_mbytes_per_sec": 0 00:34:55.083 }, 00:34:55.083 "claimed": false, 00:34:55.083 "zoned": false, 
00:34:55.083 "supported_io_types": { 00:34:55.083 "read": true, 00:34:55.083 "write": true, 00:34:55.083 "unmap": false, 00:34:55.083 "flush": false, 00:34:55.083 "reset": true, 00:34:55.083 "nvme_admin": false, 00:34:55.083 "nvme_io": false, 00:34:55.083 "nvme_io_md": false, 00:34:55.083 "write_zeroes": true, 00:34:55.083 "zcopy": false, 00:34:55.083 "get_zone_info": false, 00:34:55.083 "zone_management": false, 00:34:55.083 "zone_append": false, 00:34:55.083 "compare": false, 00:34:55.083 "compare_and_write": false, 00:34:55.083 "abort": false, 00:34:55.083 "seek_hole": false, 00:34:55.083 "seek_data": false, 00:34:55.083 "copy": false, 00:34:55.083 "nvme_iov_md": false 00:34:55.083 }, 00:34:55.083 "memory_domains": [ 00:34:55.083 { 00:34:55.083 "dma_device_id": "system", 00:34:55.083 "dma_device_type": 1 00:34:55.083 }, 00:34:55.083 { 00:34:55.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:55.083 "dma_device_type": 2 00:34:55.083 }, 00:34:55.083 { 00:34:55.083 "dma_device_id": "system", 00:34:55.083 "dma_device_type": 1 00:34:55.083 }, 00:34:55.083 { 00:34:55.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:55.083 "dma_device_type": 2 00:34:55.083 }, 00:34:55.083 { 00:34:55.083 "dma_device_id": "system", 00:34:55.083 "dma_device_type": 1 00:34:55.083 }, 00:34:55.083 { 00:34:55.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:55.083 "dma_device_type": 2 00:34:55.083 } 00:34:55.083 ], 00:34:55.083 "driver_specific": { 00:34:55.083 "raid": { 00:34:55.083 "uuid": "4afe0edb-e735-42c0-8e9d-c9bf0ae51245", 00:34:55.083 "strip_size_kb": 0, 00:34:55.083 "state": "online", 00:34:55.083 "raid_level": "raid1", 00:34:55.083 "superblock": false, 00:34:55.083 "num_base_bdevs": 3, 00:34:55.083 "num_base_bdevs_discovered": 3, 00:34:55.083 "num_base_bdevs_operational": 3, 00:34:55.083 "base_bdevs_list": [ 00:34:55.083 { 00:34:55.083 "name": "BaseBdev1", 00:34:55.083 "uuid": "4bddb173-8206-4d59-bf10-532ec0705451", 00:34:55.083 "is_configured": true, 00:34:55.083 
"data_offset": 0, 00:34:55.083 "data_size": 65536 00:34:55.083 }, 00:34:55.083 { 00:34:55.083 "name": "BaseBdev2", 00:34:55.083 "uuid": "ab495c15-776e-4877-8eed-c84ecb9c0730", 00:34:55.083 "is_configured": true, 00:34:55.083 "data_offset": 0, 00:34:55.083 "data_size": 65536 00:34:55.083 }, 00:34:55.083 { 00:34:55.083 "name": "BaseBdev3", 00:34:55.083 "uuid": "14cfec44-3528-4e05-891f-9c43de109e93", 00:34:55.083 "is_configured": true, 00:34:55.083 "data_offset": 0, 00:34:55.083 "data_size": 65536 00:34:55.083 } 00:34:55.083 ] 00:34:55.083 } 00:34:55.083 } 00:34:55.083 }' 00:34:55.083 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:55.083 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:34:55.083 BaseBdev2 00:34:55.083 BaseBdev3' 00:34:55.083 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:55.083 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:34:55.083 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:55.083 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:34:55.083 17:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.083 17:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:55.083 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:55.083 17:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.083 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:34:55.083 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:55.083 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:55.083 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:55.083 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:34:55.083 17:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.083 17:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:55.083 17:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.083 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:55.083 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:55.083 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:55.083 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:34:55.083 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:55.083 17:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.083 17:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:55.343 17:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.343 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:55.343 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:34:55.343 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:34:55.343 17:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.343 17:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:55.343 [2024-11-26 17:31:55.811993] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:55.343 17:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.343 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:34:55.343 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:34:55.343 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:34:55.343 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:34:55.343 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:34:55.343 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:34:55.343 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:55.343 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:55.343 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:55.343 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:55.343 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:55.344 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:55.344 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:34:55.344 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:55.344 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:55.344 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:55.344 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:55.344 17:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.344 17:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:55.344 17:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.344 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:55.344 "name": "Existed_Raid", 00:34:55.344 "uuid": "4afe0edb-e735-42c0-8e9d-c9bf0ae51245", 00:34:55.344 "strip_size_kb": 0, 00:34:55.344 "state": "online", 00:34:55.344 "raid_level": "raid1", 00:34:55.344 "superblock": false, 00:34:55.344 "num_base_bdevs": 3, 00:34:55.344 "num_base_bdevs_discovered": 2, 00:34:55.344 "num_base_bdevs_operational": 2, 00:34:55.344 "base_bdevs_list": [ 00:34:55.344 { 00:34:55.344 "name": null, 00:34:55.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:55.344 "is_configured": false, 00:34:55.344 "data_offset": 0, 00:34:55.344 "data_size": 65536 00:34:55.344 }, 00:34:55.344 { 00:34:55.344 "name": "BaseBdev2", 00:34:55.344 "uuid": "ab495c15-776e-4877-8eed-c84ecb9c0730", 00:34:55.344 "is_configured": true, 00:34:55.344 "data_offset": 0, 00:34:55.344 "data_size": 65536 00:34:55.344 }, 00:34:55.344 { 00:34:55.344 "name": "BaseBdev3", 00:34:55.344 "uuid": "14cfec44-3528-4e05-891f-9c43de109e93", 00:34:55.344 "is_configured": true, 00:34:55.344 "data_offset": 0, 00:34:55.344 "data_size": 65536 00:34:55.344 } 00:34:55.344 ] 
00:34:55.344 }' 00:34:55.344 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:55.344 17:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:55.913 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:34:55.913 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:34:55.913 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:55.913 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:34:55.913 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.913 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:55.913 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.913 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:34:55.913 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:55.913 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:34:55.913 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.913 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:55.913 [2024-11-26 17:31:56.481770] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:34:55.913 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.913 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:34:55.913 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:34:55.913 17:31:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:55.913 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.913 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:34:55.913 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:56.173 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.173 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:34:56.173 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:56.173 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:34:56.173 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.173 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:56.173 [2024-11-26 17:31:56.641827] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:34:56.173 [2024-11-26 17:31:56.641927] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:56.173 [2024-11-26 17:31:56.748667] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:56.173 [2024-11-26 17:31:56.748785] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:56.173 [2024-11-26 17:31:56.748806] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:34:56.173 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.173 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:34:56.173 17:31:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:34:56.173 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:56.173 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.173 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:34:56.173 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:56.173 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.173 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:34:56.173 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:34:56.173 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:34:56.173 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:34:56.173 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:34:56.173 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:34:56.173 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.173 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:56.173 BaseBdev2 00:34:56.173 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.173 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:34:56.173 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:34:56.173 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:56.173 
17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:34:56.173 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:56.173 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:56.173 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:56.173 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.173 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:56.173 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.173 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:34:56.173 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.173 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:56.434 [ 00:34:56.434 { 00:34:56.434 "name": "BaseBdev2", 00:34:56.434 "aliases": [ 00:34:56.434 "4a710a79-2a48-424d-8e27-df71e7ce7248" 00:34:56.434 ], 00:34:56.434 "product_name": "Malloc disk", 00:34:56.434 "block_size": 512, 00:34:56.434 "num_blocks": 65536, 00:34:56.434 "uuid": "4a710a79-2a48-424d-8e27-df71e7ce7248", 00:34:56.434 "assigned_rate_limits": { 00:34:56.434 "rw_ios_per_sec": 0, 00:34:56.434 "rw_mbytes_per_sec": 0, 00:34:56.434 "r_mbytes_per_sec": 0, 00:34:56.434 "w_mbytes_per_sec": 0 00:34:56.434 }, 00:34:56.434 "claimed": false, 00:34:56.434 "zoned": false, 00:34:56.434 "supported_io_types": { 00:34:56.434 "read": true, 00:34:56.434 "write": true, 00:34:56.434 "unmap": true, 00:34:56.434 "flush": true, 00:34:56.434 "reset": true, 00:34:56.434 "nvme_admin": false, 00:34:56.434 "nvme_io": false, 00:34:56.434 "nvme_io_md": false, 00:34:56.434 "write_zeroes": true, 
00:34:56.434 "zcopy": true, 00:34:56.434 "get_zone_info": false, 00:34:56.434 "zone_management": false, 00:34:56.434 "zone_append": false, 00:34:56.434 "compare": false, 00:34:56.434 "compare_and_write": false, 00:34:56.434 "abort": true, 00:34:56.434 "seek_hole": false, 00:34:56.434 "seek_data": false, 00:34:56.434 "copy": true, 00:34:56.434 "nvme_iov_md": false 00:34:56.434 }, 00:34:56.434 "memory_domains": [ 00:34:56.434 { 00:34:56.434 "dma_device_id": "system", 00:34:56.434 "dma_device_type": 1 00:34:56.434 }, 00:34:56.434 { 00:34:56.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:56.434 "dma_device_type": 2 00:34:56.434 } 00:34:56.434 ], 00:34:56.434 "driver_specific": {} 00:34:56.434 } 00:34:56.434 ] 00:34:56.434 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.434 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:34:56.434 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:34:56.434 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:34:56.434 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:34:56.434 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.434 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:56.434 BaseBdev3 00:34:56.434 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.434 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:34:56.434 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:34:56.434 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:56.434 17:31:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:34:56.434 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:56.434 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:56.434 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:56.434 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.434 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:56.434 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.434 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:34:56.434 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.434 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:56.434 [ 00:34:56.434 { 00:34:56.434 "name": "BaseBdev3", 00:34:56.434 "aliases": [ 00:34:56.434 "6df80ff1-0125-4b8c-802c-899ae44896df" 00:34:56.434 ], 00:34:56.434 "product_name": "Malloc disk", 00:34:56.434 "block_size": 512, 00:34:56.434 "num_blocks": 65536, 00:34:56.434 "uuid": "6df80ff1-0125-4b8c-802c-899ae44896df", 00:34:56.434 "assigned_rate_limits": { 00:34:56.434 "rw_ios_per_sec": 0, 00:34:56.434 "rw_mbytes_per_sec": 0, 00:34:56.434 "r_mbytes_per_sec": 0, 00:34:56.434 "w_mbytes_per_sec": 0 00:34:56.434 }, 00:34:56.434 "claimed": false, 00:34:56.434 "zoned": false, 00:34:56.434 "supported_io_types": { 00:34:56.434 "read": true, 00:34:56.434 "write": true, 00:34:56.434 "unmap": true, 00:34:56.434 "flush": true, 00:34:56.434 "reset": true, 00:34:56.434 "nvme_admin": false, 00:34:56.434 "nvme_io": false, 00:34:56.434 "nvme_io_md": false, 00:34:56.434 "write_zeroes": true, 
00:34:56.434 "zcopy": true, 00:34:56.434 "get_zone_info": false, 00:34:56.434 "zone_management": false, 00:34:56.434 "zone_append": false, 00:34:56.434 "compare": false, 00:34:56.434 "compare_and_write": false, 00:34:56.434 "abort": true, 00:34:56.434 "seek_hole": false, 00:34:56.434 "seek_data": false, 00:34:56.434 "copy": true, 00:34:56.434 "nvme_iov_md": false 00:34:56.434 }, 00:34:56.434 "memory_domains": [ 00:34:56.434 { 00:34:56.434 "dma_device_id": "system", 00:34:56.434 "dma_device_type": 1 00:34:56.434 }, 00:34:56.434 { 00:34:56.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:56.434 "dma_device_type": 2 00:34:56.434 } 00:34:56.434 ], 00:34:56.434 "driver_specific": {} 00:34:56.434 } 00:34:56.434 ] 00:34:56.434 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.434 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:34:56.434 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:34:56.434 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:34:56.434 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:34:56.434 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.434 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:56.435 [2024-11-26 17:31:56.981914] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:56.435 [2024-11-26 17:31:56.982030] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:56.435 [2024-11-26 17:31:56.982082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:56.435 [2024-11-26 17:31:56.984119] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:56.435 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.435 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:34:56.435 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:56.435 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:56.435 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:56.435 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:56.435 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:56.435 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:56.435 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:56.435 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:56.435 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:56.435 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:56.435 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:56.435 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.435 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:56.435 17:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.435 17:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:34:56.435 "name": "Existed_Raid", 00:34:56.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:56.435 "strip_size_kb": 0, 00:34:56.435 "state": "configuring", 00:34:56.435 "raid_level": "raid1", 00:34:56.435 "superblock": false, 00:34:56.435 "num_base_bdevs": 3, 00:34:56.435 "num_base_bdevs_discovered": 2, 00:34:56.435 "num_base_bdevs_operational": 3, 00:34:56.435 "base_bdevs_list": [ 00:34:56.435 { 00:34:56.435 "name": "BaseBdev1", 00:34:56.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:56.435 "is_configured": false, 00:34:56.435 "data_offset": 0, 00:34:56.435 "data_size": 0 00:34:56.435 }, 00:34:56.435 { 00:34:56.435 "name": "BaseBdev2", 00:34:56.435 "uuid": "4a710a79-2a48-424d-8e27-df71e7ce7248", 00:34:56.435 "is_configured": true, 00:34:56.435 "data_offset": 0, 00:34:56.435 "data_size": 65536 00:34:56.435 }, 00:34:56.435 { 00:34:56.435 "name": "BaseBdev3", 00:34:56.435 "uuid": "6df80ff1-0125-4b8c-802c-899ae44896df", 00:34:56.435 "is_configured": true, 00:34:56.435 "data_offset": 0, 00:34:56.435 "data_size": 65536 00:34:56.435 } 00:34:56.435 ] 00:34:56.435 }' 00:34:56.435 17:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:56.435 17:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:57.007 17:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:34:57.007 17:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.007 17:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:57.007 [2024-11-26 17:31:57.469199] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:34:57.007 17:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.007 17:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:34:57.007 17:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:57.007 17:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:57.007 17:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:57.007 17:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:57.007 17:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:57.007 17:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:57.007 17:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:57.007 17:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:57.007 17:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:57.007 17:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:57.007 17:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:57.007 17:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.007 17:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:57.007 17:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.007 17:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:57.007 "name": "Existed_Raid", 00:34:57.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:57.007 "strip_size_kb": 0, 00:34:57.007 "state": "configuring", 00:34:57.007 "raid_level": "raid1", 00:34:57.007 "superblock": false, 00:34:57.007 "num_base_bdevs": 3, 
00:34:57.007 "num_base_bdevs_discovered": 1, 00:34:57.007 "num_base_bdevs_operational": 3, 00:34:57.007 "base_bdevs_list": [ 00:34:57.007 { 00:34:57.007 "name": "BaseBdev1", 00:34:57.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:57.007 "is_configured": false, 00:34:57.007 "data_offset": 0, 00:34:57.007 "data_size": 0 00:34:57.007 }, 00:34:57.007 { 00:34:57.007 "name": null, 00:34:57.007 "uuid": "4a710a79-2a48-424d-8e27-df71e7ce7248", 00:34:57.007 "is_configured": false, 00:34:57.007 "data_offset": 0, 00:34:57.007 "data_size": 65536 00:34:57.007 }, 00:34:57.007 { 00:34:57.007 "name": "BaseBdev3", 00:34:57.007 "uuid": "6df80ff1-0125-4b8c-802c-899ae44896df", 00:34:57.007 "is_configured": true, 00:34:57.007 "data_offset": 0, 00:34:57.007 "data_size": 65536 00:34:57.007 } 00:34:57.007 ] 00:34:57.007 }' 00:34:57.007 17:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:57.007 17:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:57.268 17:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:57.268 17:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:34:57.268 17:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.268 17:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:57.528 17:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.528 17:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:34:57.528 17:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:34:57.528 17:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.528 17:31:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:57.528 [2024-11-26 17:31:58.043898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:57.528 BaseBdev1 00:34:57.528 17:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.528 17:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:34:57.528 17:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:34:57.528 17:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:57.529 17:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:34:57.529 17:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:57.529 17:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:57.529 17:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:57.529 17:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.529 17:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:57.529 17:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.529 17:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:34:57.529 17:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.529 17:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:57.529 [ 00:34:57.529 { 00:34:57.529 "name": "BaseBdev1", 00:34:57.529 "aliases": [ 00:34:57.529 "587d4f03-8303-4cca-9557-aaba2e412cfe" 00:34:57.529 ], 00:34:57.529 "product_name": "Malloc disk", 
00:34:57.529 "block_size": 512, 00:34:57.529 "num_blocks": 65536, 00:34:57.529 "uuid": "587d4f03-8303-4cca-9557-aaba2e412cfe", 00:34:57.529 "assigned_rate_limits": { 00:34:57.529 "rw_ios_per_sec": 0, 00:34:57.529 "rw_mbytes_per_sec": 0, 00:34:57.529 "r_mbytes_per_sec": 0, 00:34:57.529 "w_mbytes_per_sec": 0 00:34:57.529 }, 00:34:57.529 "claimed": true, 00:34:57.529 "claim_type": "exclusive_write", 00:34:57.529 "zoned": false, 00:34:57.529 "supported_io_types": { 00:34:57.529 "read": true, 00:34:57.529 "write": true, 00:34:57.529 "unmap": true, 00:34:57.529 "flush": true, 00:34:57.529 "reset": true, 00:34:57.529 "nvme_admin": false, 00:34:57.529 "nvme_io": false, 00:34:57.529 "nvme_io_md": false, 00:34:57.529 "write_zeroes": true, 00:34:57.529 "zcopy": true, 00:34:57.529 "get_zone_info": false, 00:34:57.529 "zone_management": false, 00:34:57.529 "zone_append": false, 00:34:57.529 "compare": false, 00:34:57.529 "compare_and_write": false, 00:34:57.529 "abort": true, 00:34:57.529 "seek_hole": false, 00:34:57.529 "seek_data": false, 00:34:57.529 "copy": true, 00:34:57.529 "nvme_iov_md": false 00:34:57.529 }, 00:34:57.529 "memory_domains": [ 00:34:57.529 { 00:34:57.529 "dma_device_id": "system", 00:34:57.529 "dma_device_type": 1 00:34:57.529 }, 00:34:57.529 { 00:34:57.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:57.529 "dma_device_type": 2 00:34:57.529 } 00:34:57.529 ], 00:34:57.529 "driver_specific": {} 00:34:57.529 } 00:34:57.529 ] 00:34:57.529 17:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.529 17:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:34:57.529 17:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:34:57.529 17:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:57.529 17:31:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:57.529 17:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:57.529 17:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:57.529 17:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:57.529 17:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:57.529 17:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:57.529 17:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:57.529 17:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:57.529 17:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:57.529 17:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:57.529 17:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.529 17:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:57.529 17:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.529 17:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:57.529 "name": "Existed_Raid", 00:34:57.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:57.529 "strip_size_kb": 0, 00:34:57.529 "state": "configuring", 00:34:57.529 "raid_level": "raid1", 00:34:57.529 "superblock": false, 00:34:57.529 "num_base_bdevs": 3, 00:34:57.529 "num_base_bdevs_discovered": 2, 00:34:57.529 "num_base_bdevs_operational": 3, 00:34:57.529 "base_bdevs_list": [ 00:34:57.529 { 00:34:57.529 "name": "BaseBdev1", 00:34:57.529 "uuid": 
"587d4f03-8303-4cca-9557-aaba2e412cfe", 00:34:57.529 "is_configured": true, 00:34:57.529 "data_offset": 0, 00:34:57.529 "data_size": 65536 00:34:57.529 }, 00:34:57.529 { 00:34:57.529 "name": null, 00:34:57.529 "uuid": "4a710a79-2a48-424d-8e27-df71e7ce7248", 00:34:57.529 "is_configured": false, 00:34:57.529 "data_offset": 0, 00:34:57.529 "data_size": 65536 00:34:57.529 }, 00:34:57.529 { 00:34:57.529 "name": "BaseBdev3", 00:34:57.529 "uuid": "6df80ff1-0125-4b8c-802c-899ae44896df", 00:34:57.529 "is_configured": true, 00:34:57.529 "data_offset": 0, 00:34:57.529 "data_size": 65536 00:34:57.529 } 00:34:57.529 ] 00:34:57.529 }' 00:34:57.529 17:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:57.529 17:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.099 17:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:34:58.099 17:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:58.099 17:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.099 17:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.099 17:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.099 17:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:34:58.099 17:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:34:58.099 17:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.099 17:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.099 [2024-11-26 17:31:58.607068] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:34:58.099 17:31:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.099 17:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:34:58.099 17:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:58.099 17:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:58.099 17:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:58.099 17:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:58.099 17:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:58.099 17:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:58.099 17:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:58.099 17:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:58.099 17:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:58.099 17:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:58.099 17:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:58.099 17:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.099 17:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.099 17:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.099 17:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:58.099 "name": "Existed_Raid", 00:34:58.099 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:34:58.099 "strip_size_kb": 0, 00:34:58.099 "state": "configuring", 00:34:58.099 "raid_level": "raid1", 00:34:58.099 "superblock": false, 00:34:58.099 "num_base_bdevs": 3, 00:34:58.099 "num_base_bdevs_discovered": 1, 00:34:58.099 "num_base_bdevs_operational": 3, 00:34:58.099 "base_bdevs_list": [ 00:34:58.099 { 00:34:58.099 "name": "BaseBdev1", 00:34:58.099 "uuid": "587d4f03-8303-4cca-9557-aaba2e412cfe", 00:34:58.099 "is_configured": true, 00:34:58.099 "data_offset": 0, 00:34:58.099 "data_size": 65536 00:34:58.099 }, 00:34:58.099 { 00:34:58.099 "name": null, 00:34:58.099 "uuid": "4a710a79-2a48-424d-8e27-df71e7ce7248", 00:34:58.099 "is_configured": false, 00:34:58.099 "data_offset": 0, 00:34:58.099 "data_size": 65536 00:34:58.099 }, 00:34:58.099 { 00:34:58.099 "name": null, 00:34:58.099 "uuid": "6df80ff1-0125-4b8c-802c-899ae44896df", 00:34:58.099 "is_configured": false, 00:34:58.099 "data_offset": 0, 00:34:58.099 "data_size": 65536 00:34:58.099 } 00:34:58.099 ] 00:34:58.099 }' 00:34:58.099 17:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:58.099 17:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.669 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:34:58.669 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:58.669 17:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.669 17:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.669 17:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.669 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:34:58.669 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:34:58.669 17:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.669 17:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.669 [2024-11-26 17:31:59.142216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:58.669 17:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.669 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:34:58.669 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:58.669 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:58.669 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:58.669 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:58.669 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:58.669 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:58.669 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:58.669 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:58.669 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:58.669 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:58.669 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:58.669 17:31:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.669 17:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.669 17:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.669 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:58.669 "name": "Existed_Raid", 00:34:58.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:58.669 "strip_size_kb": 0, 00:34:58.669 "state": "configuring", 00:34:58.669 "raid_level": "raid1", 00:34:58.669 "superblock": false, 00:34:58.669 "num_base_bdevs": 3, 00:34:58.669 "num_base_bdevs_discovered": 2, 00:34:58.669 "num_base_bdevs_operational": 3, 00:34:58.669 "base_bdevs_list": [ 00:34:58.669 { 00:34:58.669 "name": "BaseBdev1", 00:34:58.669 "uuid": "587d4f03-8303-4cca-9557-aaba2e412cfe", 00:34:58.669 "is_configured": true, 00:34:58.669 "data_offset": 0, 00:34:58.669 "data_size": 65536 00:34:58.669 }, 00:34:58.669 { 00:34:58.669 "name": null, 00:34:58.669 "uuid": "4a710a79-2a48-424d-8e27-df71e7ce7248", 00:34:58.669 "is_configured": false, 00:34:58.669 "data_offset": 0, 00:34:58.669 "data_size": 65536 00:34:58.669 }, 00:34:58.669 { 00:34:58.669 "name": "BaseBdev3", 00:34:58.669 "uuid": "6df80ff1-0125-4b8c-802c-899ae44896df", 00:34:58.669 "is_configured": true, 00:34:58.669 "data_offset": 0, 00:34:58.669 "data_size": 65536 00:34:58.669 } 00:34:58.669 ] 00:34:58.669 }' 00:34:58.669 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:58.669 17:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.927 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:58.927 17:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.927 17:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:34:58.927 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:34:58.927 17:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.927 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:34:58.927 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:34:58.927 17:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.927 17:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.927 [2024-11-26 17:31:59.605491] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:59.187 17:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.187 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:34:59.187 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:59.187 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:59.187 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:59.187 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:59.187 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:59.187 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:59.187 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:59.187 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:59.187 17:31:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:59.187 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:59.187 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:59.187 17:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.187 17:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:59.187 17:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.187 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:59.187 "name": "Existed_Raid", 00:34:59.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:59.187 "strip_size_kb": 0, 00:34:59.187 "state": "configuring", 00:34:59.187 "raid_level": "raid1", 00:34:59.187 "superblock": false, 00:34:59.187 "num_base_bdevs": 3, 00:34:59.187 "num_base_bdevs_discovered": 1, 00:34:59.187 "num_base_bdevs_operational": 3, 00:34:59.187 "base_bdevs_list": [ 00:34:59.187 { 00:34:59.187 "name": null, 00:34:59.187 "uuid": "587d4f03-8303-4cca-9557-aaba2e412cfe", 00:34:59.187 "is_configured": false, 00:34:59.187 "data_offset": 0, 00:34:59.187 "data_size": 65536 00:34:59.187 }, 00:34:59.187 { 00:34:59.187 "name": null, 00:34:59.187 "uuid": "4a710a79-2a48-424d-8e27-df71e7ce7248", 00:34:59.187 "is_configured": false, 00:34:59.187 "data_offset": 0, 00:34:59.187 "data_size": 65536 00:34:59.187 }, 00:34:59.187 { 00:34:59.187 "name": "BaseBdev3", 00:34:59.187 "uuid": "6df80ff1-0125-4b8c-802c-899ae44896df", 00:34:59.187 "is_configured": true, 00:34:59.187 "data_offset": 0, 00:34:59.187 "data_size": 65536 00:34:59.187 } 00:34:59.187 ] 00:34:59.187 }' 00:34:59.187 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:59.187 17:31:59 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:34:59.773 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:59.773 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:34:59.773 17:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.773 17:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:59.773 17:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.773 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:34:59.773 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:34:59.773 17:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.773 17:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:59.773 [2024-11-26 17:32:00.230874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:59.773 17:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.773 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:34:59.773 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:59.773 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:59.773 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:59.773 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:59.773 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:34:59.773 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:59.773 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:59.773 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:59.773 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:59.773 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:59.773 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:59.773 17:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.773 17:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:59.773 17:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.773 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:59.773 "name": "Existed_Raid", 00:34:59.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:59.773 "strip_size_kb": 0, 00:34:59.773 "state": "configuring", 00:34:59.773 "raid_level": "raid1", 00:34:59.773 "superblock": false, 00:34:59.773 "num_base_bdevs": 3, 00:34:59.773 "num_base_bdevs_discovered": 2, 00:34:59.773 "num_base_bdevs_operational": 3, 00:34:59.773 "base_bdevs_list": [ 00:34:59.773 { 00:34:59.773 "name": null, 00:34:59.773 "uuid": "587d4f03-8303-4cca-9557-aaba2e412cfe", 00:34:59.773 "is_configured": false, 00:34:59.773 "data_offset": 0, 00:34:59.773 "data_size": 65536 00:34:59.773 }, 00:34:59.773 { 00:34:59.773 "name": "BaseBdev2", 00:34:59.773 "uuid": "4a710a79-2a48-424d-8e27-df71e7ce7248", 00:34:59.773 "is_configured": true, 00:34:59.773 "data_offset": 0, 00:34:59.773 "data_size": 65536 00:34:59.773 }, 00:34:59.773 { 
00:34:59.773 "name": "BaseBdev3", 00:34:59.773 "uuid": "6df80ff1-0125-4b8c-802c-899ae44896df", 00:34:59.773 "is_configured": true, 00:34:59.773 "data_offset": 0, 00:34:59.773 "data_size": 65536 00:34:59.773 } 00:34:59.773 ] 00:34:59.773 }' 00:34:59.774 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:59.774 17:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:00.033 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:00.033 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:35:00.033 17:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.034 17:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:00.034 17:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.034 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:35:00.034 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:35:00.034 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:00.034 17:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.034 17:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:00.034 17:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.293 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 587d4f03-8303-4cca-9557-aaba2e412cfe 00:35:00.293 17:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.293 17:32:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:00.293 [2024-11-26 17:32:00.791154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:35:00.293 [2024-11-26 17:32:00.791223] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:35:00.293 [2024-11-26 17:32:00.791232] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:35:00.293 [2024-11-26 17:32:00.791501] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:35:00.293 [2024-11-26 17:32:00.791675] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:35:00.293 [2024-11-26 17:32:00.791687] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:35:00.293 [2024-11-26 17:32:00.792005] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:00.293 NewBaseBdev 00:35:00.293 17:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.293 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:35:00.293 17:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:35:00.293 17:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:00.293 17:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:35:00.293 17:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:00.293 17:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:00.293 17:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:00.293 17:32:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.293 17:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:00.293 17:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.293 17:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:35:00.293 17:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.293 17:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:00.293 [ 00:35:00.293 { 00:35:00.293 "name": "NewBaseBdev", 00:35:00.293 "aliases": [ 00:35:00.293 "587d4f03-8303-4cca-9557-aaba2e412cfe" 00:35:00.293 ], 00:35:00.293 "product_name": "Malloc disk", 00:35:00.293 "block_size": 512, 00:35:00.293 "num_blocks": 65536, 00:35:00.293 "uuid": "587d4f03-8303-4cca-9557-aaba2e412cfe", 00:35:00.293 "assigned_rate_limits": { 00:35:00.293 "rw_ios_per_sec": 0, 00:35:00.293 "rw_mbytes_per_sec": 0, 00:35:00.293 "r_mbytes_per_sec": 0, 00:35:00.293 "w_mbytes_per_sec": 0 00:35:00.293 }, 00:35:00.293 "claimed": true, 00:35:00.293 "claim_type": "exclusive_write", 00:35:00.293 "zoned": false, 00:35:00.293 "supported_io_types": { 00:35:00.293 "read": true, 00:35:00.293 "write": true, 00:35:00.293 "unmap": true, 00:35:00.293 "flush": true, 00:35:00.293 "reset": true, 00:35:00.293 "nvme_admin": false, 00:35:00.293 "nvme_io": false, 00:35:00.293 "nvme_io_md": false, 00:35:00.293 "write_zeroes": true, 00:35:00.293 "zcopy": true, 00:35:00.293 "get_zone_info": false, 00:35:00.293 "zone_management": false, 00:35:00.293 "zone_append": false, 00:35:00.293 "compare": false, 00:35:00.293 "compare_and_write": false, 00:35:00.293 "abort": true, 00:35:00.293 "seek_hole": false, 00:35:00.293 "seek_data": false, 00:35:00.293 "copy": true, 00:35:00.293 "nvme_iov_md": false 00:35:00.293 }, 00:35:00.293 "memory_domains": [ 00:35:00.293 { 00:35:00.293 
"dma_device_id": "system", 00:35:00.293 "dma_device_type": 1 00:35:00.293 }, 00:35:00.293 { 00:35:00.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:00.293 "dma_device_type": 2 00:35:00.293 } 00:35:00.293 ], 00:35:00.293 "driver_specific": {} 00:35:00.293 } 00:35:00.293 ] 00:35:00.293 17:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.293 17:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:35:00.293 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:35:00.293 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:00.293 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:00.293 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:00.293 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:00.293 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:00.293 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:00.293 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:00.293 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:00.293 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:00.293 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:00.293 17:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.293 17:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:00.293 17:32:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:00.293 17:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.293 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:00.293 "name": "Existed_Raid", 00:35:00.293 "uuid": "f85c8ce4-8f9b-41e8-abeb-3eaf172031a4", 00:35:00.293 "strip_size_kb": 0, 00:35:00.293 "state": "online", 00:35:00.293 "raid_level": "raid1", 00:35:00.293 "superblock": false, 00:35:00.293 "num_base_bdevs": 3, 00:35:00.293 "num_base_bdevs_discovered": 3, 00:35:00.293 "num_base_bdevs_operational": 3, 00:35:00.293 "base_bdevs_list": [ 00:35:00.293 { 00:35:00.293 "name": "NewBaseBdev", 00:35:00.293 "uuid": "587d4f03-8303-4cca-9557-aaba2e412cfe", 00:35:00.293 "is_configured": true, 00:35:00.293 "data_offset": 0, 00:35:00.293 "data_size": 65536 00:35:00.293 }, 00:35:00.293 { 00:35:00.293 "name": "BaseBdev2", 00:35:00.293 "uuid": "4a710a79-2a48-424d-8e27-df71e7ce7248", 00:35:00.293 "is_configured": true, 00:35:00.293 "data_offset": 0, 00:35:00.293 "data_size": 65536 00:35:00.293 }, 00:35:00.293 { 00:35:00.293 "name": "BaseBdev3", 00:35:00.293 "uuid": "6df80ff1-0125-4b8c-802c-899ae44896df", 00:35:00.293 "is_configured": true, 00:35:00.293 "data_offset": 0, 00:35:00.293 "data_size": 65536 00:35:00.293 } 00:35:00.293 ] 00:35:00.293 }' 00:35:00.293 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:00.293 17:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:00.861 17:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:35:00.861 17:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:35:00.861 17:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:35:00.861 
17:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:35:00.861 17:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:35:00.861 17:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:35:00.861 17:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:35:00.861 17:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:35:00.861 17:32:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.861 17:32:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:00.861 [2024-11-26 17:32:01.278708] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:00.861 17:32:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.861 17:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:00.861 "name": "Existed_Raid", 00:35:00.861 "aliases": [ 00:35:00.861 "f85c8ce4-8f9b-41e8-abeb-3eaf172031a4" 00:35:00.861 ], 00:35:00.861 "product_name": "Raid Volume", 00:35:00.861 "block_size": 512, 00:35:00.861 "num_blocks": 65536, 00:35:00.861 "uuid": "f85c8ce4-8f9b-41e8-abeb-3eaf172031a4", 00:35:00.861 "assigned_rate_limits": { 00:35:00.861 "rw_ios_per_sec": 0, 00:35:00.861 "rw_mbytes_per_sec": 0, 00:35:00.861 "r_mbytes_per_sec": 0, 00:35:00.861 "w_mbytes_per_sec": 0 00:35:00.861 }, 00:35:00.861 "claimed": false, 00:35:00.861 "zoned": false, 00:35:00.861 "supported_io_types": { 00:35:00.861 "read": true, 00:35:00.861 "write": true, 00:35:00.861 "unmap": false, 00:35:00.861 "flush": false, 00:35:00.861 "reset": true, 00:35:00.861 "nvme_admin": false, 00:35:00.861 "nvme_io": false, 00:35:00.861 "nvme_io_md": false, 00:35:00.861 "write_zeroes": true, 00:35:00.861 "zcopy": false, 00:35:00.861 
"get_zone_info": false, 00:35:00.861 "zone_management": false, 00:35:00.861 "zone_append": false, 00:35:00.861 "compare": false, 00:35:00.861 "compare_and_write": false, 00:35:00.861 "abort": false, 00:35:00.861 "seek_hole": false, 00:35:00.861 "seek_data": false, 00:35:00.861 "copy": false, 00:35:00.861 "nvme_iov_md": false 00:35:00.861 }, 00:35:00.861 "memory_domains": [ 00:35:00.861 { 00:35:00.861 "dma_device_id": "system", 00:35:00.861 "dma_device_type": 1 00:35:00.861 }, 00:35:00.861 { 00:35:00.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:00.861 "dma_device_type": 2 00:35:00.861 }, 00:35:00.861 { 00:35:00.861 "dma_device_id": "system", 00:35:00.861 "dma_device_type": 1 00:35:00.861 }, 00:35:00.861 { 00:35:00.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:00.861 "dma_device_type": 2 00:35:00.861 }, 00:35:00.861 { 00:35:00.861 "dma_device_id": "system", 00:35:00.861 "dma_device_type": 1 00:35:00.861 }, 00:35:00.861 { 00:35:00.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:00.861 "dma_device_type": 2 00:35:00.861 } 00:35:00.861 ], 00:35:00.861 "driver_specific": { 00:35:00.861 "raid": { 00:35:00.861 "uuid": "f85c8ce4-8f9b-41e8-abeb-3eaf172031a4", 00:35:00.861 "strip_size_kb": 0, 00:35:00.861 "state": "online", 00:35:00.861 "raid_level": "raid1", 00:35:00.861 "superblock": false, 00:35:00.861 "num_base_bdevs": 3, 00:35:00.861 "num_base_bdevs_discovered": 3, 00:35:00.861 "num_base_bdevs_operational": 3, 00:35:00.861 "base_bdevs_list": [ 00:35:00.861 { 00:35:00.861 "name": "NewBaseBdev", 00:35:00.861 "uuid": "587d4f03-8303-4cca-9557-aaba2e412cfe", 00:35:00.861 "is_configured": true, 00:35:00.861 "data_offset": 0, 00:35:00.861 "data_size": 65536 00:35:00.861 }, 00:35:00.861 { 00:35:00.861 "name": "BaseBdev2", 00:35:00.862 "uuid": "4a710a79-2a48-424d-8e27-df71e7ce7248", 00:35:00.862 "is_configured": true, 00:35:00.862 "data_offset": 0, 00:35:00.862 "data_size": 65536 00:35:00.862 }, 00:35:00.862 { 00:35:00.862 "name": "BaseBdev3", 00:35:00.862 "uuid": 
"6df80ff1-0125-4b8c-802c-899ae44896df", 00:35:00.862 "is_configured": true, 00:35:00.862 "data_offset": 0, 00:35:00.862 "data_size": 65536 00:35:00.862 } 00:35:00.862 ] 00:35:00.862 } 00:35:00.862 } 00:35:00.862 }' 00:35:00.862 17:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:00.862 17:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:35:00.862 BaseBdev2 00:35:00.862 BaseBdev3' 00:35:00.862 17:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:00.862 17:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:35:00.862 17:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:00.862 17:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:00.862 17:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:35:00.862 17:32:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.862 17:32:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:00.862 17:32:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.862 17:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:00.862 17:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:00.862 17:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:00.862 17:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:35:00.862 17:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:35:00.862 17:32:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.862 17:32:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:00.862 17:32:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.862 17:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:00.862 17:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:00.862 17:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:00.862 17:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:35:00.862 17:32:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.862 17:32:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:00.862 17:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:00.862 17:32:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.862 17:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:00.862 17:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:00.862 17:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:35:00.862 17:32:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.862 17:32:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:35:00.862 [2024-11-26 17:32:01.537944] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:00.862 [2024-11-26 17:32:01.538029] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:00.862 [2024-11-26 17:32:01.538117] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:00.862 [2024-11-26 17:32:01.538444] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:00.862 [2024-11-26 17:32:01.538454] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:35:00.862 17:32:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.862 17:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67639 00:35:00.862 17:32:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67639 ']' 00:35:00.862 17:32:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67639 00:35:00.862 17:32:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:35:00.862 17:32:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:00.862 17:32:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67639 00:35:01.121 17:32:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:01.121 17:32:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:01.121 17:32:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67639' 00:35:01.121 killing process with pid 67639 00:35:01.121 17:32:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67639 00:35:01.121 
[2024-11-26 17:32:01.570219] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:01.121 17:32:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67639 00:35:01.437 [2024-11-26 17:32:01.887324] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:02.812 17:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:35:02.813 ************************************ 00:35:02.813 END TEST raid_state_function_test 00:35:02.813 ************************************ 00:35:02.813 00:35:02.813 real 0m11.199s 00:35:02.813 user 0m17.819s 00:35:02.813 sys 0m1.932s 00:35:02.813 17:32:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:02.813 17:32:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:02.813 17:32:03 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:35:02.813 17:32:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:35:02.813 17:32:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:02.813 17:32:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:02.813 ************************************ 00:35:02.813 START TEST raid_state_function_test_sb 00:35:02.813 ************************************ 00:35:02.813 17:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:35:02.813 17:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:35:02.813 17:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:35:02.813 17:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:35:02.813 17:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:35:02.813 17:32:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:35:02.813 17:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:02.813 17:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:35:02.813 17:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:35:02.813 17:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:02.813 17:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:35:02.813 17:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:35:02.813 17:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:02.813 17:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:35:02.813 17:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:35:02.813 17:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:02.813 17:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:35:02.813 17:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:35:02.813 17:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:35:02.813 17:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:35:02.813 17:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:35:02.813 17:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:35:02.813 17:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:35:02.813 
17:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:35:02.813 17:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:35:02.813 17:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:35:02.813 Process raid pid: 68266 00:35:02.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:02.813 17:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68266 00:35:02.813 17:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68266' 00:35:02.813 17:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68266 00:35:02.813 17:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68266 ']' 00:35:02.813 17:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:02.813 17:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:02.813 17:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:02.813 17:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:02.813 17:32:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:35:02.813 17:32:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:02.813 [2024-11-26 17:32:03.254177] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:35:02.813 [2024-11-26 17:32:03.254382] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:02.813 [2024-11-26 17:32:03.416960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:03.071 [2024-11-26 17:32:03.552011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:03.071 [2024-11-26 17:32:03.761633] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:03.072 [2024-11-26 17:32:03.761752] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:03.641 17:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:03.641 17:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:35:03.641 17:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:35:03.641 17:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.641 17:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:03.641 [2024-11-26 17:32:04.167228] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:03.641 [2024-11-26 17:32:04.167381] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:03.641 [2024-11-26 17:32:04.167441] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:03.641 [2024-11-26 17:32:04.167478] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:03.641 [2024-11-26 17:32:04.167539] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:35:03.641 [2024-11-26 17:32:04.167583] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:35:03.641 17:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.641 17:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:35:03.641 17:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:03.641 17:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:03.641 17:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:03.641 17:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:03.641 17:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:03.642 17:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:03.642 17:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:03.642 17:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:03.642 17:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:03.642 17:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:03.642 17:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:03.642 17:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.642 17:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:03.642 17:32:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.642 17:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:03.642 "name": "Existed_Raid", 00:35:03.642 "uuid": "fdb6a403-b91b-4a43-bf86-4ec5b50c4553", 00:35:03.642 "strip_size_kb": 0, 00:35:03.642 "state": "configuring", 00:35:03.642 "raid_level": "raid1", 00:35:03.642 "superblock": true, 00:35:03.642 "num_base_bdevs": 3, 00:35:03.642 "num_base_bdevs_discovered": 0, 00:35:03.642 "num_base_bdevs_operational": 3, 00:35:03.642 "base_bdevs_list": [ 00:35:03.642 { 00:35:03.642 "name": "BaseBdev1", 00:35:03.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:03.642 "is_configured": false, 00:35:03.642 "data_offset": 0, 00:35:03.642 "data_size": 0 00:35:03.642 }, 00:35:03.642 { 00:35:03.642 "name": "BaseBdev2", 00:35:03.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:03.642 "is_configured": false, 00:35:03.642 "data_offset": 0, 00:35:03.642 "data_size": 0 00:35:03.642 }, 00:35:03.642 { 00:35:03.642 "name": "BaseBdev3", 00:35:03.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:03.642 "is_configured": false, 00:35:03.642 "data_offset": 0, 00:35:03.642 "data_size": 0 00:35:03.642 } 00:35:03.642 ] 00:35:03.642 }' 00:35:03.642 17:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:03.642 17:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:03.901 17:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:35:03.901 17:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.901 17:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:04.160 [2024-11-26 17:32:04.598386] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:04.160 [2024-11-26 17:32:04.598490] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:35:04.160 17:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.160 17:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:35:04.160 17:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.160 17:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:04.160 [2024-11-26 17:32:04.610380] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:04.160 [2024-11-26 17:32:04.610431] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:04.160 [2024-11-26 17:32:04.610442] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:04.160 [2024-11-26 17:32:04.610451] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:04.160 [2024-11-26 17:32:04.610458] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:35:04.160 [2024-11-26 17:32:04.610467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:35:04.160 17:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.160 17:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:35:04.160 17:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.160 17:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:04.160 [2024-11-26 17:32:04.663494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:04.160 BaseBdev1 
00:35:04.160 17:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.160 17:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:35:04.160 17:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:35:04.160 17:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:04.160 17:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:35:04.160 17:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:04.160 17:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:04.160 17:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:04.160 17:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.161 17:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:04.161 17:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.161 17:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:35:04.161 17:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.161 17:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:04.161 [ 00:35:04.161 { 00:35:04.161 "name": "BaseBdev1", 00:35:04.161 "aliases": [ 00:35:04.161 "7858f2cc-18d8-4a19-bd07-ce6e0a8dc056" 00:35:04.161 ], 00:35:04.161 "product_name": "Malloc disk", 00:35:04.161 "block_size": 512, 00:35:04.161 "num_blocks": 65536, 00:35:04.161 "uuid": "7858f2cc-18d8-4a19-bd07-ce6e0a8dc056", 00:35:04.161 "assigned_rate_limits": { 00:35:04.161 
"rw_ios_per_sec": 0, 00:35:04.161 "rw_mbytes_per_sec": 0, 00:35:04.161 "r_mbytes_per_sec": 0, 00:35:04.161 "w_mbytes_per_sec": 0 00:35:04.161 }, 00:35:04.161 "claimed": true, 00:35:04.161 "claim_type": "exclusive_write", 00:35:04.161 "zoned": false, 00:35:04.161 "supported_io_types": { 00:35:04.161 "read": true, 00:35:04.161 "write": true, 00:35:04.161 "unmap": true, 00:35:04.161 "flush": true, 00:35:04.161 "reset": true, 00:35:04.161 "nvme_admin": false, 00:35:04.161 "nvme_io": false, 00:35:04.161 "nvme_io_md": false, 00:35:04.161 "write_zeroes": true, 00:35:04.161 "zcopy": true, 00:35:04.161 "get_zone_info": false, 00:35:04.161 "zone_management": false, 00:35:04.161 "zone_append": false, 00:35:04.161 "compare": false, 00:35:04.161 "compare_and_write": false, 00:35:04.161 "abort": true, 00:35:04.161 "seek_hole": false, 00:35:04.161 "seek_data": false, 00:35:04.161 "copy": true, 00:35:04.161 "nvme_iov_md": false 00:35:04.161 }, 00:35:04.161 "memory_domains": [ 00:35:04.161 { 00:35:04.161 "dma_device_id": "system", 00:35:04.161 "dma_device_type": 1 00:35:04.161 }, 00:35:04.161 { 00:35:04.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:04.161 "dma_device_type": 2 00:35:04.161 } 00:35:04.161 ], 00:35:04.161 "driver_specific": {} 00:35:04.161 } 00:35:04.161 ] 00:35:04.161 17:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.161 17:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:35:04.161 17:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:35:04.161 17:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:04.161 17:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:04.161 17:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:35:04.161 17:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:04.161 17:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:04.161 17:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:04.161 17:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:04.161 17:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:04.161 17:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:04.161 17:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:04.161 17:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:04.161 17:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.161 17:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:04.161 17:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.161 17:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:04.161 "name": "Existed_Raid", 00:35:04.161 "uuid": "8384afc7-79c4-45f6-a6de-23fe91de1317", 00:35:04.161 "strip_size_kb": 0, 00:35:04.161 "state": "configuring", 00:35:04.161 "raid_level": "raid1", 00:35:04.161 "superblock": true, 00:35:04.161 "num_base_bdevs": 3, 00:35:04.161 "num_base_bdevs_discovered": 1, 00:35:04.161 "num_base_bdevs_operational": 3, 00:35:04.161 "base_bdevs_list": [ 00:35:04.161 { 00:35:04.161 "name": "BaseBdev1", 00:35:04.161 "uuid": "7858f2cc-18d8-4a19-bd07-ce6e0a8dc056", 00:35:04.161 "is_configured": true, 00:35:04.161 "data_offset": 2048, 00:35:04.161 "data_size": 63488 
00:35:04.161 }, 00:35:04.161 { 00:35:04.161 "name": "BaseBdev2", 00:35:04.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:04.161 "is_configured": false, 00:35:04.161 "data_offset": 0, 00:35:04.161 "data_size": 0 00:35:04.161 }, 00:35:04.161 { 00:35:04.161 "name": "BaseBdev3", 00:35:04.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:04.161 "is_configured": false, 00:35:04.161 "data_offset": 0, 00:35:04.161 "data_size": 0 00:35:04.161 } 00:35:04.161 ] 00:35:04.161 }' 00:35:04.161 17:32:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:04.161 17:32:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:04.729 17:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:35:04.729 17:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.729 17:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:04.729 [2024-11-26 17:32:05.134765] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:04.729 [2024-11-26 17:32:05.134910] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:35:04.729 17:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.729 17:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:35:04.729 17:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.729 17:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:04.729 [2024-11-26 17:32:05.146847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:04.729 [2024-11-26 17:32:05.148811] 
bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:04.729 [2024-11-26 17:32:05.148862] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:04.729 [2024-11-26 17:32:05.148873] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:35:04.729 [2024-11-26 17:32:05.148882] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:35:04.729 17:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.729 17:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:35:04.729 17:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:35:04.729 17:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:35:04.729 17:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:04.729 17:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:04.729 17:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:04.729 17:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:04.729 17:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:04.729 17:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:04.729 17:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:04.729 17:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:04.729 17:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:35:04.729 17:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:04.729 17:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:04.729 17:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.729 17:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:04.729 17:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.729 17:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:04.729 "name": "Existed_Raid", 00:35:04.729 "uuid": "3357569f-1cff-41d3-b727-121229096b0e", 00:35:04.729 "strip_size_kb": 0, 00:35:04.729 "state": "configuring", 00:35:04.729 "raid_level": "raid1", 00:35:04.729 "superblock": true, 00:35:04.729 "num_base_bdevs": 3, 00:35:04.729 "num_base_bdevs_discovered": 1, 00:35:04.729 "num_base_bdevs_operational": 3, 00:35:04.729 "base_bdevs_list": [ 00:35:04.729 { 00:35:04.729 "name": "BaseBdev1", 00:35:04.729 "uuid": "7858f2cc-18d8-4a19-bd07-ce6e0a8dc056", 00:35:04.729 "is_configured": true, 00:35:04.729 "data_offset": 2048, 00:35:04.729 "data_size": 63488 00:35:04.729 }, 00:35:04.729 { 00:35:04.729 "name": "BaseBdev2", 00:35:04.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:04.729 "is_configured": false, 00:35:04.729 "data_offset": 0, 00:35:04.729 "data_size": 0 00:35:04.729 }, 00:35:04.729 { 00:35:04.729 "name": "BaseBdev3", 00:35:04.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:04.729 "is_configured": false, 00:35:04.729 "data_offset": 0, 00:35:04.729 "data_size": 0 00:35:04.729 } 00:35:04.729 ] 00:35:04.729 }' 00:35:04.729 17:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:04.729 17:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:35:04.988 17:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:35:04.988 17:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.988 17:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:04.988 [2024-11-26 17:32:05.650440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:04.988 BaseBdev2 00:35:04.989 17:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.989 17:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:35:04.989 17:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:35:04.989 17:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:04.989 17:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:35:04.989 17:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:04.989 17:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:04.989 17:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:04.989 17:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.989 17:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:04.989 17:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.989 17:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:35:04.989 17:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:35:04.989 17:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:05.248 [ 00:35:05.248 { 00:35:05.248 "name": "BaseBdev2", 00:35:05.248 "aliases": [ 00:35:05.248 "63d8ce09-b7ba-42a2-a991-0c6d2ea56605" 00:35:05.248 ], 00:35:05.248 "product_name": "Malloc disk", 00:35:05.248 "block_size": 512, 00:35:05.248 "num_blocks": 65536, 00:35:05.248 "uuid": "63d8ce09-b7ba-42a2-a991-0c6d2ea56605", 00:35:05.248 "assigned_rate_limits": { 00:35:05.248 "rw_ios_per_sec": 0, 00:35:05.248 "rw_mbytes_per_sec": 0, 00:35:05.248 "r_mbytes_per_sec": 0, 00:35:05.248 "w_mbytes_per_sec": 0 00:35:05.248 }, 00:35:05.248 "claimed": true, 00:35:05.248 "claim_type": "exclusive_write", 00:35:05.248 "zoned": false, 00:35:05.248 "supported_io_types": { 00:35:05.248 "read": true, 00:35:05.248 "write": true, 00:35:05.248 "unmap": true, 00:35:05.248 "flush": true, 00:35:05.248 "reset": true, 00:35:05.248 "nvme_admin": false, 00:35:05.248 "nvme_io": false, 00:35:05.248 "nvme_io_md": false, 00:35:05.248 "write_zeroes": true, 00:35:05.248 "zcopy": true, 00:35:05.248 "get_zone_info": false, 00:35:05.248 "zone_management": false, 00:35:05.248 "zone_append": false, 00:35:05.248 "compare": false, 00:35:05.248 "compare_and_write": false, 00:35:05.248 "abort": true, 00:35:05.248 "seek_hole": false, 00:35:05.248 "seek_data": false, 00:35:05.248 "copy": true, 00:35:05.248 "nvme_iov_md": false 00:35:05.248 }, 00:35:05.248 "memory_domains": [ 00:35:05.248 { 00:35:05.248 "dma_device_id": "system", 00:35:05.248 "dma_device_type": 1 00:35:05.248 }, 00:35:05.248 { 00:35:05.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:05.248 "dma_device_type": 2 00:35:05.248 } 00:35:05.248 ], 00:35:05.248 "driver_specific": {} 00:35:05.248 } 00:35:05.248 ] 00:35:05.248 17:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.248 17:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:35:05.248 17:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:35:05.248 17:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:35:05.248 17:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:35:05.248 17:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:05.248 17:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:05.248 17:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:05.248 17:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:05.248 17:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:05.248 17:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:05.248 17:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:05.248 17:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:05.248 17:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:05.248 17:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:05.248 17:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:05.248 17:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.248 17:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:05.248 17:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.248 
17:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:05.248 "name": "Existed_Raid", 00:35:05.248 "uuid": "3357569f-1cff-41d3-b727-121229096b0e", 00:35:05.248 "strip_size_kb": 0, 00:35:05.248 "state": "configuring", 00:35:05.248 "raid_level": "raid1", 00:35:05.248 "superblock": true, 00:35:05.248 "num_base_bdevs": 3, 00:35:05.248 "num_base_bdevs_discovered": 2, 00:35:05.248 "num_base_bdevs_operational": 3, 00:35:05.248 "base_bdevs_list": [ 00:35:05.248 { 00:35:05.248 "name": "BaseBdev1", 00:35:05.248 "uuid": "7858f2cc-18d8-4a19-bd07-ce6e0a8dc056", 00:35:05.248 "is_configured": true, 00:35:05.248 "data_offset": 2048, 00:35:05.248 "data_size": 63488 00:35:05.248 }, 00:35:05.248 { 00:35:05.248 "name": "BaseBdev2", 00:35:05.248 "uuid": "63d8ce09-b7ba-42a2-a991-0c6d2ea56605", 00:35:05.248 "is_configured": true, 00:35:05.248 "data_offset": 2048, 00:35:05.248 "data_size": 63488 00:35:05.248 }, 00:35:05.248 { 00:35:05.248 "name": "BaseBdev3", 00:35:05.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:05.248 "is_configured": false, 00:35:05.248 "data_offset": 0, 00:35:05.248 "data_size": 0 00:35:05.248 } 00:35:05.248 ] 00:35:05.248 }' 00:35:05.248 17:32:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:05.248 17:32:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:05.507 17:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:35:05.507 17:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.507 17:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:05.507 [2024-11-26 17:32:06.193403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:05.507 [2024-11-26 17:32:06.193850] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:35:05.507 [2024-11-26 17:32:06.193919] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:35:05.507 BaseBdev3 00:35:05.507 [2024-11-26 17:32:06.194259] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:35:05.507 [2024-11-26 17:32:06.194436] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:35:05.507 [2024-11-26 17:32:06.194447] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:35:05.507 [2024-11-26 17:32:06.194638] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:05.507 17:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.507 17:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:35:05.507 17:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:35:05.507 17:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:05.507 17:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:35:05.507 17:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:05.507 17:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:05.507 17:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:05.507 17:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.507 17:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:05.766 17:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.766 17:32:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:35:05.766 17:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.766 17:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:05.766 [ 00:35:05.766 { 00:35:05.766 "name": "BaseBdev3", 00:35:05.766 "aliases": [ 00:35:05.766 "20bfce3e-6849-416c-b12f-7e711d3ee477" 00:35:05.766 ], 00:35:05.766 "product_name": "Malloc disk", 00:35:05.766 "block_size": 512, 00:35:05.766 "num_blocks": 65536, 00:35:05.766 "uuid": "20bfce3e-6849-416c-b12f-7e711d3ee477", 00:35:05.766 "assigned_rate_limits": { 00:35:05.766 "rw_ios_per_sec": 0, 00:35:05.766 "rw_mbytes_per_sec": 0, 00:35:05.766 "r_mbytes_per_sec": 0, 00:35:05.766 "w_mbytes_per_sec": 0 00:35:05.766 }, 00:35:05.766 "claimed": true, 00:35:05.766 "claim_type": "exclusive_write", 00:35:05.766 "zoned": false, 00:35:05.766 "supported_io_types": { 00:35:05.766 "read": true, 00:35:05.766 "write": true, 00:35:05.766 "unmap": true, 00:35:05.766 "flush": true, 00:35:05.766 "reset": true, 00:35:05.766 "nvme_admin": false, 00:35:05.766 "nvme_io": false, 00:35:05.766 "nvme_io_md": false, 00:35:05.766 "write_zeroes": true, 00:35:05.766 "zcopy": true, 00:35:05.766 "get_zone_info": false, 00:35:05.766 "zone_management": false, 00:35:05.766 "zone_append": false, 00:35:05.766 "compare": false, 00:35:05.766 "compare_and_write": false, 00:35:05.766 "abort": true, 00:35:05.766 "seek_hole": false, 00:35:05.766 "seek_data": false, 00:35:05.766 "copy": true, 00:35:05.766 "nvme_iov_md": false 00:35:05.766 }, 00:35:05.766 "memory_domains": [ 00:35:05.766 { 00:35:05.766 "dma_device_id": "system", 00:35:05.766 "dma_device_type": 1 00:35:05.766 }, 00:35:05.766 { 00:35:05.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:05.766 "dma_device_type": 2 00:35:05.766 } 00:35:05.766 ], 00:35:05.766 "driver_specific": {} 00:35:05.766 } 00:35:05.766 ] 
00:35:05.766 17:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.766 17:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:35:05.766 17:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:35:05.766 17:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:35:05.766 17:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:35:05.766 17:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:05.766 17:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:05.766 17:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:05.766 17:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:05.766 17:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:05.766 17:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:05.766 17:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:05.766 17:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:05.766 17:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:05.766 17:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:05.766 17:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.766 17:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:05.766 17:32:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:05.766 17:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.766 17:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:05.766 "name": "Existed_Raid", 00:35:05.766 "uuid": "3357569f-1cff-41d3-b727-121229096b0e", 00:35:05.766 "strip_size_kb": 0, 00:35:05.766 "state": "online", 00:35:05.766 "raid_level": "raid1", 00:35:05.766 "superblock": true, 00:35:05.766 "num_base_bdevs": 3, 00:35:05.766 "num_base_bdevs_discovered": 3, 00:35:05.766 "num_base_bdevs_operational": 3, 00:35:05.766 "base_bdevs_list": [ 00:35:05.766 { 00:35:05.766 "name": "BaseBdev1", 00:35:05.766 "uuid": "7858f2cc-18d8-4a19-bd07-ce6e0a8dc056", 00:35:05.766 "is_configured": true, 00:35:05.766 "data_offset": 2048, 00:35:05.766 "data_size": 63488 00:35:05.766 }, 00:35:05.766 { 00:35:05.766 "name": "BaseBdev2", 00:35:05.766 "uuid": "63d8ce09-b7ba-42a2-a991-0c6d2ea56605", 00:35:05.766 "is_configured": true, 00:35:05.766 "data_offset": 2048, 00:35:05.766 "data_size": 63488 00:35:05.766 }, 00:35:05.766 { 00:35:05.766 "name": "BaseBdev3", 00:35:05.766 "uuid": "20bfce3e-6849-416c-b12f-7e711d3ee477", 00:35:05.766 "is_configured": true, 00:35:05.766 "data_offset": 2048, 00:35:05.766 "data_size": 63488 00:35:05.766 } 00:35:05.766 ] 00:35:05.766 }' 00:35:05.766 17:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:05.766 17:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:06.025 17:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:35:06.025 17:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:35:06.025 17:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:35:06.025 17:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:35:06.025 17:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:35:06.025 17:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:35:06.025 17:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:35:06.025 17:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:35:06.025 17:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.025 17:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:06.025 [2024-11-26 17:32:06.676969] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:06.025 17:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.025 17:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:06.025 "name": "Existed_Raid", 00:35:06.025 "aliases": [ 00:35:06.025 "3357569f-1cff-41d3-b727-121229096b0e" 00:35:06.025 ], 00:35:06.025 "product_name": "Raid Volume", 00:35:06.025 "block_size": 512, 00:35:06.026 "num_blocks": 63488, 00:35:06.026 "uuid": "3357569f-1cff-41d3-b727-121229096b0e", 00:35:06.026 "assigned_rate_limits": { 00:35:06.026 "rw_ios_per_sec": 0, 00:35:06.026 "rw_mbytes_per_sec": 0, 00:35:06.026 "r_mbytes_per_sec": 0, 00:35:06.026 "w_mbytes_per_sec": 0 00:35:06.026 }, 00:35:06.026 "claimed": false, 00:35:06.026 "zoned": false, 00:35:06.026 "supported_io_types": { 00:35:06.026 "read": true, 00:35:06.026 "write": true, 00:35:06.026 "unmap": false, 00:35:06.026 "flush": false, 00:35:06.026 "reset": true, 00:35:06.026 "nvme_admin": false, 00:35:06.026 "nvme_io": false, 00:35:06.026 "nvme_io_md": false, 00:35:06.026 
"write_zeroes": true, 00:35:06.026 "zcopy": false, 00:35:06.026 "get_zone_info": false, 00:35:06.026 "zone_management": false, 00:35:06.026 "zone_append": false, 00:35:06.026 "compare": false, 00:35:06.026 "compare_and_write": false, 00:35:06.026 "abort": false, 00:35:06.026 "seek_hole": false, 00:35:06.026 "seek_data": false, 00:35:06.026 "copy": false, 00:35:06.026 "nvme_iov_md": false 00:35:06.026 }, 00:35:06.026 "memory_domains": [ 00:35:06.026 { 00:35:06.026 "dma_device_id": "system", 00:35:06.026 "dma_device_type": 1 00:35:06.026 }, 00:35:06.026 { 00:35:06.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:06.026 "dma_device_type": 2 00:35:06.026 }, 00:35:06.026 { 00:35:06.026 "dma_device_id": "system", 00:35:06.026 "dma_device_type": 1 00:35:06.026 }, 00:35:06.026 { 00:35:06.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:06.026 "dma_device_type": 2 00:35:06.026 }, 00:35:06.026 { 00:35:06.026 "dma_device_id": "system", 00:35:06.026 "dma_device_type": 1 00:35:06.026 }, 00:35:06.026 { 00:35:06.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:06.026 "dma_device_type": 2 00:35:06.026 } 00:35:06.026 ], 00:35:06.026 "driver_specific": { 00:35:06.026 "raid": { 00:35:06.026 "uuid": "3357569f-1cff-41d3-b727-121229096b0e", 00:35:06.026 "strip_size_kb": 0, 00:35:06.026 "state": "online", 00:35:06.026 "raid_level": "raid1", 00:35:06.026 "superblock": true, 00:35:06.026 "num_base_bdevs": 3, 00:35:06.026 "num_base_bdevs_discovered": 3, 00:35:06.026 "num_base_bdevs_operational": 3, 00:35:06.026 "base_bdevs_list": [ 00:35:06.026 { 00:35:06.026 "name": "BaseBdev1", 00:35:06.026 "uuid": "7858f2cc-18d8-4a19-bd07-ce6e0a8dc056", 00:35:06.026 "is_configured": true, 00:35:06.026 "data_offset": 2048, 00:35:06.026 "data_size": 63488 00:35:06.026 }, 00:35:06.026 { 00:35:06.026 "name": "BaseBdev2", 00:35:06.026 "uuid": "63d8ce09-b7ba-42a2-a991-0c6d2ea56605", 00:35:06.026 "is_configured": true, 00:35:06.026 "data_offset": 2048, 00:35:06.026 "data_size": 63488 00:35:06.026 }, 
00:35:06.026 { 00:35:06.026 "name": "BaseBdev3", 00:35:06.026 "uuid": "20bfce3e-6849-416c-b12f-7e711d3ee477", 00:35:06.026 "is_configured": true, 00:35:06.026 "data_offset": 2048, 00:35:06.026 "data_size": 63488 00:35:06.026 } 00:35:06.026 ] 00:35:06.026 } 00:35:06.026 } 00:35:06.026 }' 00:35:06.026 17:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:06.286 17:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:35:06.286 BaseBdev2 00:35:06.286 BaseBdev3' 00:35:06.286 17:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:06.286 17:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:35:06.286 17:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:06.286 17:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:35:06.286 17:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:06.286 17:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.286 17:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:06.286 17:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.286 17:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:06.286 17:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:06.286 17:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:06.286 
17:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:35:06.286 17:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.286 17:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:06.286 17:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:06.286 17:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.286 17:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:06.286 17:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:06.286 17:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:06.286 17:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:06.286 17:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:35:06.286 17:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.286 17:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:06.287 17:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.287 17:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:06.287 17:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:06.287 17:32:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:35:06.287 17:32:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.287 17:32:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:06.287 [2024-11-26 17:32:06.956221] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:06.546 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.546 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:35:06.546 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:35:06.546 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:35:06.546 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:35:06.546 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:35:06.546 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:35:06.546 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:06.546 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:06.546 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:06.546 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:06.546 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:06.546 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:06.546 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:06.546 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:06.546 
17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:06.546 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:06.546 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.546 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:06.546 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:06.546 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.546 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:06.546 "name": "Existed_Raid", 00:35:06.546 "uuid": "3357569f-1cff-41d3-b727-121229096b0e", 00:35:06.546 "strip_size_kb": 0, 00:35:06.546 "state": "online", 00:35:06.546 "raid_level": "raid1", 00:35:06.546 "superblock": true, 00:35:06.546 "num_base_bdevs": 3, 00:35:06.546 "num_base_bdevs_discovered": 2, 00:35:06.546 "num_base_bdevs_operational": 2, 00:35:06.546 "base_bdevs_list": [ 00:35:06.546 { 00:35:06.546 "name": null, 00:35:06.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:06.546 "is_configured": false, 00:35:06.546 "data_offset": 0, 00:35:06.546 "data_size": 63488 00:35:06.546 }, 00:35:06.546 { 00:35:06.546 "name": "BaseBdev2", 00:35:06.546 "uuid": "63d8ce09-b7ba-42a2-a991-0c6d2ea56605", 00:35:06.546 "is_configured": true, 00:35:06.546 "data_offset": 2048, 00:35:06.546 "data_size": 63488 00:35:06.546 }, 00:35:06.546 { 00:35:06.546 "name": "BaseBdev3", 00:35:06.546 "uuid": "20bfce3e-6849-416c-b12f-7e711d3ee477", 00:35:06.546 "is_configured": true, 00:35:06.546 "data_offset": 2048, 00:35:06.546 "data_size": 63488 00:35:06.546 } 00:35:06.546 ] 00:35:06.546 }' 00:35:06.546 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:06.546 
17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:07.117 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:35:07.117 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:35:07.117 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:35:07.117 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:07.117 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.117 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:07.117 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.117 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:35:07.117 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:07.117 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:35:07.117 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.117 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:07.117 [2024-11-26 17:32:07.555717] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:35:07.117 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.117 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:35:07.117 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:35:07.117 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:35:07.117 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:35:07.117 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.117 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:07.117 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.117 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:35:07.117 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:07.117 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:35:07.117 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.117 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:07.117 [2024-11-26 17:32:07.719113] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:35:07.117 [2024-11-26 17:32:07.719227] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:07.376 [2024-11-26 17:32:07.820183] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:07.376 [2024-11-26 17:32:07.820237] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:07.376 [2024-11-26 17:32:07.820249] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:35:07.376 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.376 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:35:07.376 17:32:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:35:07.376 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:07.376 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.376 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:35:07.376 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:07.376 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.376 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:35:07.376 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:35:07.376 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:35:07.376 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:35:07.377 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:35:07.377 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:35:07.377 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.377 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:07.377 BaseBdev2 00:35:07.377 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.377 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:35:07.377 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:35:07.377 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:35:07.377 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:35:07.377 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:07.377 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:07.377 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:07.377 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.377 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:07.377 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.377 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:35:07.377 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.377 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:07.377 [ 00:35:07.377 { 00:35:07.377 "name": "BaseBdev2", 00:35:07.377 "aliases": [ 00:35:07.377 "e81bb7f6-b2e3-42e4-a42d-87d1bbd62e86" 00:35:07.377 ], 00:35:07.377 "product_name": "Malloc disk", 00:35:07.377 "block_size": 512, 00:35:07.377 "num_blocks": 65536, 00:35:07.377 "uuid": "e81bb7f6-b2e3-42e4-a42d-87d1bbd62e86", 00:35:07.377 "assigned_rate_limits": { 00:35:07.377 "rw_ios_per_sec": 0, 00:35:07.377 "rw_mbytes_per_sec": 0, 00:35:07.377 "r_mbytes_per_sec": 0, 00:35:07.377 "w_mbytes_per_sec": 0 00:35:07.377 }, 00:35:07.377 "claimed": false, 00:35:07.377 "zoned": false, 00:35:07.377 "supported_io_types": { 00:35:07.377 "read": true, 00:35:07.377 "write": true, 00:35:07.377 "unmap": true, 00:35:07.377 "flush": true, 00:35:07.377 "reset": true, 00:35:07.377 "nvme_admin": false, 00:35:07.377 "nvme_io": false, 00:35:07.377 
"nvme_io_md": false, 00:35:07.377 "write_zeroes": true, 00:35:07.377 "zcopy": true, 00:35:07.377 "get_zone_info": false, 00:35:07.377 "zone_management": false, 00:35:07.377 "zone_append": false, 00:35:07.377 "compare": false, 00:35:07.377 "compare_and_write": false, 00:35:07.377 "abort": true, 00:35:07.377 "seek_hole": false, 00:35:07.377 "seek_data": false, 00:35:07.377 "copy": true, 00:35:07.377 "nvme_iov_md": false 00:35:07.377 }, 00:35:07.377 "memory_domains": [ 00:35:07.377 { 00:35:07.377 "dma_device_id": "system", 00:35:07.377 "dma_device_type": 1 00:35:07.377 }, 00:35:07.377 { 00:35:07.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:07.377 "dma_device_type": 2 00:35:07.377 } 00:35:07.377 ], 00:35:07.377 "driver_specific": {} 00:35:07.377 } 00:35:07.377 ] 00:35:07.377 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.377 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:35:07.377 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:35:07.377 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:35:07.377 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:35:07.377 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.377 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:07.377 BaseBdev3 00:35:07.377 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.377 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:35:07.377 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:35:07.377 17:32:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:07.377 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:35:07.377 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:07.377 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:07.377 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:07.377 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.377 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:07.377 17:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.377 17:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:35:07.377 17:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.377 17:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:07.377 [ 00:35:07.377 { 00:35:07.377 "name": "BaseBdev3", 00:35:07.377 "aliases": [ 00:35:07.377 "b8c5cf0c-8996-4c1b-b396-9ae7438ab644" 00:35:07.377 ], 00:35:07.377 "product_name": "Malloc disk", 00:35:07.377 "block_size": 512, 00:35:07.377 "num_blocks": 65536, 00:35:07.377 "uuid": "b8c5cf0c-8996-4c1b-b396-9ae7438ab644", 00:35:07.377 "assigned_rate_limits": { 00:35:07.377 "rw_ios_per_sec": 0, 00:35:07.377 "rw_mbytes_per_sec": 0, 00:35:07.377 "r_mbytes_per_sec": 0, 00:35:07.377 "w_mbytes_per_sec": 0 00:35:07.377 }, 00:35:07.377 "claimed": false, 00:35:07.377 "zoned": false, 00:35:07.377 "supported_io_types": { 00:35:07.377 "read": true, 00:35:07.377 "write": true, 00:35:07.377 "unmap": true, 00:35:07.377 "flush": true, 00:35:07.377 "reset": true, 00:35:07.377 "nvme_admin": false, 
00:35:07.377 "nvme_io": false, 00:35:07.377 "nvme_io_md": false, 00:35:07.377 "write_zeroes": true, 00:35:07.377 "zcopy": true, 00:35:07.377 "get_zone_info": false, 00:35:07.377 "zone_management": false, 00:35:07.377 "zone_append": false, 00:35:07.377 "compare": false, 00:35:07.377 "compare_and_write": false, 00:35:07.377 "abort": true, 00:35:07.377 "seek_hole": false, 00:35:07.377 "seek_data": false, 00:35:07.377 "copy": true, 00:35:07.377 "nvme_iov_md": false 00:35:07.377 }, 00:35:07.377 "memory_domains": [ 00:35:07.377 { 00:35:07.377 "dma_device_id": "system", 00:35:07.377 "dma_device_type": 1 00:35:07.377 }, 00:35:07.377 { 00:35:07.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:07.377 "dma_device_type": 2 00:35:07.377 } 00:35:07.377 ], 00:35:07.377 "driver_specific": {} 00:35:07.377 } 00:35:07.377 ] 00:35:07.377 17:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.377 17:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:35:07.377 17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:35:07.377 17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:35:07.377 17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:35:07.377 17:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.377 17:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:07.377 [2024-11-26 17:32:08.021420] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:07.377 [2024-11-26 17:32:08.021520] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:07.377 [2024-11-26 17:32:08.021561] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:07.377 [2024-11-26 17:32:08.023322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:07.377 17:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.377 17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:35:07.377 17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:07.377 17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:07.377 17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:07.377 17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:07.377 17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:07.377 17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:07.377 17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:07.377 17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:07.377 17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:07.377 17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:07.377 17:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.377 17:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:07.377 17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:07.377 
17:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.637 17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:07.637 "name": "Existed_Raid", 00:35:07.637 "uuid": "7ae99b0d-55de-4f96-8879-e0cf2c7badf7", 00:35:07.637 "strip_size_kb": 0, 00:35:07.637 "state": "configuring", 00:35:07.637 "raid_level": "raid1", 00:35:07.637 "superblock": true, 00:35:07.637 "num_base_bdevs": 3, 00:35:07.637 "num_base_bdevs_discovered": 2, 00:35:07.637 "num_base_bdevs_operational": 3, 00:35:07.637 "base_bdevs_list": [ 00:35:07.637 { 00:35:07.637 "name": "BaseBdev1", 00:35:07.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:07.637 "is_configured": false, 00:35:07.637 "data_offset": 0, 00:35:07.637 "data_size": 0 00:35:07.637 }, 00:35:07.637 { 00:35:07.637 "name": "BaseBdev2", 00:35:07.637 "uuid": "e81bb7f6-b2e3-42e4-a42d-87d1bbd62e86", 00:35:07.637 "is_configured": true, 00:35:07.637 "data_offset": 2048, 00:35:07.637 "data_size": 63488 00:35:07.637 }, 00:35:07.637 { 00:35:07.637 "name": "BaseBdev3", 00:35:07.637 "uuid": "b8c5cf0c-8996-4c1b-b396-9ae7438ab644", 00:35:07.637 "is_configured": true, 00:35:07.637 "data_offset": 2048, 00:35:07.637 "data_size": 63488 00:35:07.637 } 00:35:07.637 ] 00:35:07.637 }' 00:35:07.637 17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:07.637 17:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:07.896 17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:35:07.896 17:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.896 17:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:07.896 [2024-11-26 17:32:08.492667] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:35:07.896 17:32:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.897 17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:35:07.897 17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:07.897 17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:07.897 17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:07.897 17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:07.897 17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:07.897 17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:07.897 17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:07.897 17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:07.897 17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:07.897 17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:07.897 17:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.897 17:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:07.897 17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:07.897 17:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.897 17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:07.897 "name": 
"Existed_Raid", 00:35:07.897 "uuid": "7ae99b0d-55de-4f96-8879-e0cf2c7badf7", 00:35:07.897 "strip_size_kb": 0, 00:35:07.897 "state": "configuring", 00:35:07.897 "raid_level": "raid1", 00:35:07.897 "superblock": true, 00:35:07.897 "num_base_bdevs": 3, 00:35:07.897 "num_base_bdevs_discovered": 1, 00:35:07.897 "num_base_bdevs_operational": 3, 00:35:07.897 "base_bdevs_list": [ 00:35:07.897 { 00:35:07.897 "name": "BaseBdev1", 00:35:07.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:07.897 "is_configured": false, 00:35:07.897 "data_offset": 0, 00:35:07.897 "data_size": 0 00:35:07.897 }, 00:35:07.897 { 00:35:07.897 "name": null, 00:35:07.897 "uuid": "e81bb7f6-b2e3-42e4-a42d-87d1bbd62e86", 00:35:07.897 "is_configured": false, 00:35:07.897 "data_offset": 0, 00:35:07.897 "data_size": 63488 00:35:07.897 }, 00:35:07.897 { 00:35:07.897 "name": "BaseBdev3", 00:35:07.897 "uuid": "b8c5cf0c-8996-4c1b-b396-9ae7438ab644", 00:35:07.897 "is_configured": true, 00:35:07.897 "data_offset": 2048, 00:35:07.897 "data_size": 63488 00:35:07.897 } 00:35:07.897 ] 00:35:07.897 }' 00:35:07.897 17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:07.897 17:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:08.465 17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:08.465 17:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.465 17:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:08.465 17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:35:08.466 17:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.466 17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:35:08.466 
17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:35:08.466 17:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.466 17:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:08.466 [2024-11-26 17:32:09.023572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:08.466 BaseBdev1 00:35:08.466 17:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.466 17:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:35:08.466 17:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:35:08.466 17:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:08.466 17:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:35:08.466 17:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:08.466 17:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:08.466 17:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:08.466 17:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.466 17:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:08.466 17:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.466 17:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:35:08.466 17:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:35:08.466 17:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:08.466 [ 00:35:08.466 { 00:35:08.466 "name": "BaseBdev1", 00:35:08.466 "aliases": [ 00:35:08.466 "dce25147-dba3-4193-b513-bcc58b3cfd85" 00:35:08.466 ], 00:35:08.466 "product_name": "Malloc disk", 00:35:08.466 "block_size": 512, 00:35:08.466 "num_blocks": 65536, 00:35:08.466 "uuid": "dce25147-dba3-4193-b513-bcc58b3cfd85", 00:35:08.466 "assigned_rate_limits": { 00:35:08.466 "rw_ios_per_sec": 0, 00:35:08.466 "rw_mbytes_per_sec": 0, 00:35:08.466 "r_mbytes_per_sec": 0, 00:35:08.466 "w_mbytes_per_sec": 0 00:35:08.466 }, 00:35:08.466 "claimed": true, 00:35:08.466 "claim_type": "exclusive_write", 00:35:08.466 "zoned": false, 00:35:08.466 "supported_io_types": { 00:35:08.466 "read": true, 00:35:08.466 "write": true, 00:35:08.466 "unmap": true, 00:35:08.466 "flush": true, 00:35:08.466 "reset": true, 00:35:08.466 "nvme_admin": false, 00:35:08.466 "nvme_io": false, 00:35:08.466 "nvme_io_md": false, 00:35:08.466 "write_zeroes": true, 00:35:08.466 "zcopy": true, 00:35:08.466 "get_zone_info": false, 00:35:08.466 "zone_management": false, 00:35:08.466 "zone_append": false, 00:35:08.466 "compare": false, 00:35:08.466 "compare_and_write": false, 00:35:08.466 "abort": true, 00:35:08.466 "seek_hole": false, 00:35:08.466 "seek_data": false, 00:35:08.466 "copy": true, 00:35:08.466 "nvme_iov_md": false 00:35:08.466 }, 00:35:08.466 "memory_domains": [ 00:35:08.466 { 00:35:08.466 "dma_device_id": "system", 00:35:08.466 "dma_device_type": 1 00:35:08.466 }, 00:35:08.466 { 00:35:08.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:08.466 "dma_device_type": 2 00:35:08.466 } 00:35:08.466 ], 00:35:08.466 "driver_specific": {} 00:35:08.466 } 00:35:08.466 ] 00:35:08.466 17:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.466 17:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:35:08.466 
17:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:35:08.466 17:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:08.466 17:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:08.466 17:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:08.466 17:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:08.466 17:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:08.466 17:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:08.466 17:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:08.466 17:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:08.466 17:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:08.466 17:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:08.466 17:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:08.466 17:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.466 17:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:08.466 17:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.466 17:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:08.466 "name": "Existed_Raid", 00:35:08.466 "uuid": "7ae99b0d-55de-4f96-8879-e0cf2c7badf7", 00:35:08.466 "strip_size_kb": 0, 
00:35:08.466 "state": "configuring", 00:35:08.466 "raid_level": "raid1", 00:35:08.466 "superblock": true, 00:35:08.466 "num_base_bdevs": 3, 00:35:08.466 "num_base_bdevs_discovered": 2, 00:35:08.466 "num_base_bdevs_operational": 3, 00:35:08.466 "base_bdevs_list": [ 00:35:08.466 { 00:35:08.466 "name": "BaseBdev1", 00:35:08.466 "uuid": "dce25147-dba3-4193-b513-bcc58b3cfd85", 00:35:08.466 "is_configured": true, 00:35:08.466 "data_offset": 2048, 00:35:08.466 "data_size": 63488 00:35:08.466 }, 00:35:08.466 { 00:35:08.466 "name": null, 00:35:08.466 "uuid": "e81bb7f6-b2e3-42e4-a42d-87d1bbd62e86", 00:35:08.466 "is_configured": false, 00:35:08.466 "data_offset": 0, 00:35:08.466 "data_size": 63488 00:35:08.466 }, 00:35:08.466 { 00:35:08.466 "name": "BaseBdev3", 00:35:08.466 "uuid": "b8c5cf0c-8996-4c1b-b396-9ae7438ab644", 00:35:08.466 "is_configured": true, 00:35:08.466 "data_offset": 2048, 00:35:08.466 "data_size": 63488 00:35:08.466 } 00:35:08.466 ] 00:35:08.466 }' 00:35:08.466 17:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:08.466 17:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:09.067 17:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:09.067 17:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.067 17:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:09.067 17:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:35:09.067 17:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.067 17:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:35:09.067 17:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:35:09.067 17:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.068 17:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:09.068 [2024-11-26 17:32:09.522760] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:35:09.068 17:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.068 17:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:35:09.068 17:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:09.068 17:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:09.068 17:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:09.068 17:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:09.068 17:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:09.068 17:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:09.068 17:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:09.068 17:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:09.068 17:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:09.068 17:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:09.068 17:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:09.068 17:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:35:09.068 17:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:09.068 17:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.068 17:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:09.068 "name": "Existed_Raid", 00:35:09.068 "uuid": "7ae99b0d-55de-4f96-8879-e0cf2c7badf7", 00:35:09.068 "strip_size_kb": 0, 00:35:09.068 "state": "configuring", 00:35:09.068 "raid_level": "raid1", 00:35:09.068 "superblock": true, 00:35:09.068 "num_base_bdevs": 3, 00:35:09.068 "num_base_bdevs_discovered": 1, 00:35:09.068 "num_base_bdevs_operational": 3, 00:35:09.068 "base_bdevs_list": [ 00:35:09.068 { 00:35:09.068 "name": "BaseBdev1", 00:35:09.068 "uuid": "dce25147-dba3-4193-b513-bcc58b3cfd85", 00:35:09.068 "is_configured": true, 00:35:09.068 "data_offset": 2048, 00:35:09.068 "data_size": 63488 00:35:09.068 }, 00:35:09.068 { 00:35:09.068 "name": null, 00:35:09.068 "uuid": "e81bb7f6-b2e3-42e4-a42d-87d1bbd62e86", 00:35:09.068 "is_configured": false, 00:35:09.068 "data_offset": 0, 00:35:09.068 "data_size": 63488 00:35:09.068 }, 00:35:09.068 { 00:35:09.068 "name": null, 00:35:09.068 "uuid": "b8c5cf0c-8996-4c1b-b396-9ae7438ab644", 00:35:09.068 "is_configured": false, 00:35:09.068 "data_offset": 0, 00:35:09.068 "data_size": 63488 00:35:09.068 } 00:35:09.068 ] 00:35:09.068 }' 00:35:09.068 17:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:09.068 17:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:09.327 17:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:09.327 17:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.327 17:32:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:09.327 17:32:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:35:09.327 17:32:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.587 17:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:35:09.587 17:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:35:09.587 17:32:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.587 17:32:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:09.587 [2024-11-26 17:32:10.045925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:09.587 17:32:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.587 17:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:35:09.587 17:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:09.587 17:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:09.587 17:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:09.587 17:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:09.587 17:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:09.587 17:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:09.587 17:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:09.587 17:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:35:09.587 17:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:09.587 17:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:09.587 17:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:09.587 17:32:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.587 17:32:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:09.587 17:32:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.587 17:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:09.587 "name": "Existed_Raid", 00:35:09.587 "uuid": "7ae99b0d-55de-4f96-8879-e0cf2c7badf7", 00:35:09.587 "strip_size_kb": 0, 00:35:09.587 "state": "configuring", 00:35:09.587 "raid_level": "raid1", 00:35:09.587 "superblock": true, 00:35:09.587 "num_base_bdevs": 3, 00:35:09.587 "num_base_bdevs_discovered": 2, 00:35:09.587 "num_base_bdevs_operational": 3, 00:35:09.587 "base_bdevs_list": [ 00:35:09.587 { 00:35:09.587 "name": "BaseBdev1", 00:35:09.587 "uuid": "dce25147-dba3-4193-b513-bcc58b3cfd85", 00:35:09.587 "is_configured": true, 00:35:09.587 "data_offset": 2048, 00:35:09.587 "data_size": 63488 00:35:09.587 }, 00:35:09.587 { 00:35:09.587 "name": null, 00:35:09.587 "uuid": "e81bb7f6-b2e3-42e4-a42d-87d1bbd62e86", 00:35:09.587 "is_configured": false, 00:35:09.587 "data_offset": 0, 00:35:09.587 "data_size": 63488 00:35:09.587 }, 00:35:09.587 { 00:35:09.587 "name": "BaseBdev3", 00:35:09.587 "uuid": "b8c5cf0c-8996-4c1b-b396-9ae7438ab644", 00:35:09.587 "is_configured": true, 00:35:09.587 "data_offset": 2048, 00:35:09.587 "data_size": 63488 00:35:09.587 } 00:35:09.587 ] 00:35:09.587 }' 00:35:09.587 17:32:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:09.587 17:32:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:09.847 17:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:09.847 17:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:35:09.847 17:32:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.847 17:32:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:09.847 17:32:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.847 17:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:35:09.847 17:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:35:09.847 17:32:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.847 17:32:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:10.106 [2024-11-26 17:32:10.545069] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:10.106 17:32:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.106 17:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:35:10.106 17:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:10.106 17:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:10.106 17:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:10.106 17:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:35:10.106 17:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:10.106 17:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:10.106 17:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:10.106 17:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:10.106 17:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:10.106 17:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:10.106 17:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:10.106 17:32:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.106 17:32:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:10.106 17:32:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.106 17:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:10.106 "name": "Existed_Raid", 00:35:10.106 "uuid": "7ae99b0d-55de-4f96-8879-e0cf2c7badf7", 00:35:10.106 "strip_size_kb": 0, 00:35:10.106 "state": "configuring", 00:35:10.106 "raid_level": "raid1", 00:35:10.106 "superblock": true, 00:35:10.106 "num_base_bdevs": 3, 00:35:10.106 "num_base_bdevs_discovered": 1, 00:35:10.106 "num_base_bdevs_operational": 3, 00:35:10.106 "base_bdevs_list": [ 00:35:10.106 { 00:35:10.106 "name": null, 00:35:10.106 "uuid": "dce25147-dba3-4193-b513-bcc58b3cfd85", 00:35:10.106 "is_configured": false, 00:35:10.106 "data_offset": 0, 00:35:10.106 "data_size": 63488 00:35:10.106 }, 00:35:10.106 { 00:35:10.106 "name": null, 00:35:10.106 "uuid": 
"e81bb7f6-b2e3-42e4-a42d-87d1bbd62e86", 00:35:10.106 "is_configured": false, 00:35:10.106 "data_offset": 0, 00:35:10.106 "data_size": 63488 00:35:10.106 }, 00:35:10.106 { 00:35:10.106 "name": "BaseBdev3", 00:35:10.106 "uuid": "b8c5cf0c-8996-4c1b-b396-9ae7438ab644", 00:35:10.106 "is_configured": true, 00:35:10.106 "data_offset": 2048, 00:35:10.106 "data_size": 63488 00:35:10.106 } 00:35:10.106 ] 00:35:10.106 }' 00:35:10.106 17:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:10.106 17:32:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:10.675 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:10.675 17:32:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.675 17:32:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:10.675 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:35:10.676 17:32:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.676 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:35:10.676 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:35:10.676 17:32:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.676 17:32:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:10.676 [2024-11-26 17:32:11.139259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:10.676 17:32:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.676 17:32:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:35:10.676 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:10.676 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:10.676 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:10.676 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:10.676 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:10.676 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:10.676 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:10.676 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:10.676 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:10.676 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:10.676 17:32:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.676 17:32:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:10.676 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:10.676 17:32:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.676 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:10.676 "name": "Existed_Raid", 00:35:10.676 "uuid": "7ae99b0d-55de-4f96-8879-e0cf2c7badf7", 00:35:10.676 "strip_size_kb": 0, 00:35:10.676 "state": "configuring", 00:35:10.676 
"raid_level": "raid1", 00:35:10.676 "superblock": true, 00:35:10.676 "num_base_bdevs": 3, 00:35:10.676 "num_base_bdevs_discovered": 2, 00:35:10.676 "num_base_bdevs_operational": 3, 00:35:10.676 "base_bdevs_list": [ 00:35:10.676 { 00:35:10.676 "name": null, 00:35:10.676 "uuid": "dce25147-dba3-4193-b513-bcc58b3cfd85", 00:35:10.676 "is_configured": false, 00:35:10.676 "data_offset": 0, 00:35:10.676 "data_size": 63488 00:35:10.676 }, 00:35:10.676 { 00:35:10.676 "name": "BaseBdev2", 00:35:10.676 "uuid": "e81bb7f6-b2e3-42e4-a42d-87d1bbd62e86", 00:35:10.676 "is_configured": true, 00:35:10.676 "data_offset": 2048, 00:35:10.676 "data_size": 63488 00:35:10.676 }, 00:35:10.676 { 00:35:10.676 "name": "BaseBdev3", 00:35:10.676 "uuid": "b8c5cf0c-8996-4c1b-b396-9ae7438ab644", 00:35:10.676 "is_configured": true, 00:35:10.676 "data_offset": 2048, 00:35:10.676 "data_size": 63488 00:35:10.676 } 00:35:10.676 ] 00:35:10.676 }' 00:35:10.676 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:10.676 17:32:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:10.935 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:10.936 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:35:10.936 17:32:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.936 17:32:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:11.195 17:32:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.195 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:35:11.195 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:11.195 17:32:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.195 17:32:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:11.195 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:35:11.195 17:32:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.195 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u dce25147-dba3-4193-b513-bcc58b3cfd85 00:35:11.195 17:32:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.195 17:32:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:11.195 [2024-11-26 17:32:11.758115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:35:11.195 [2024-11-26 17:32:11.758337] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:35:11.195 [2024-11-26 17:32:11.758350] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:35:11.195 [2024-11-26 17:32:11.758653] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:35:11.195 NewBaseBdev 00:35:11.195 [2024-11-26 17:32:11.758815] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:35:11.195 [2024-11-26 17:32:11.758834] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:35:11.195 [2024-11-26 17:32:11.758983] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:11.195 17:32:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.195 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:35:11.195 
17:32:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:35:11.195 17:32:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:11.195 17:32:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:35:11.195 17:32:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:11.195 17:32:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:11.195 17:32:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:11.195 17:32:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.195 17:32:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:11.195 17:32:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.195 17:32:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:35:11.195 17:32:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.195 17:32:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:11.195 [ 00:35:11.195 { 00:35:11.195 "name": "NewBaseBdev", 00:35:11.195 "aliases": [ 00:35:11.195 "dce25147-dba3-4193-b513-bcc58b3cfd85" 00:35:11.195 ], 00:35:11.195 "product_name": "Malloc disk", 00:35:11.195 "block_size": 512, 00:35:11.195 "num_blocks": 65536, 00:35:11.195 "uuid": "dce25147-dba3-4193-b513-bcc58b3cfd85", 00:35:11.195 "assigned_rate_limits": { 00:35:11.195 "rw_ios_per_sec": 0, 00:35:11.195 "rw_mbytes_per_sec": 0, 00:35:11.195 "r_mbytes_per_sec": 0, 00:35:11.196 "w_mbytes_per_sec": 0 00:35:11.196 }, 00:35:11.196 "claimed": true, 00:35:11.196 "claim_type": "exclusive_write", 00:35:11.196 
"zoned": false, 00:35:11.196 "supported_io_types": { 00:35:11.196 "read": true, 00:35:11.196 "write": true, 00:35:11.196 "unmap": true, 00:35:11.196 "flush": true, 00:35:11.196 "reset": true, 00:35:11.196 "nvme_admin": false, 00:35:11.196 "nvme_io": false, 00:35:11.196 "nvme_io_md": false, 00:35:11.196 "write_zeroes": true, 00:35:11.196 "zcopy": true, 00:35:11.196 "get_zone_info": false, 00:35:11.196 "zone_management": false, 00:35:11.196 "zone_append": false, 00:35:11.196 "compare": false, 00:35:11.196 "compare_and_write": false, 00:35:11.196 "abort": true, 00:35:11.196 "seek_hole": false, 00:35:11.196 "seek_data": false, 00:35:11.196 "copy": true, 00:35:11.196 "nvme_iov_md": false 00:35:11.196 }, 00:35:11.196 "memory_domains": [ 00:35:11.196 { 00:35:11.196 "dma_device_id": "system", 00:35:11.196 "dma_device_type": 1 00:35:11.196 }, 00:35:11.196 { 00:35:11.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:11.196 "dma_device_type": 2 00:35:11.196 } 00:35:11.196 ], 00:35:11.196 "driver_specific": {} 00:35:11.196 } 00:35:11.196 ] 00:35:11.196 17:32:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.196 17:32:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:35:11.196 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:35:11.196 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:11.196 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:11.196 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:11.196 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:11.196 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:35:11.196 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:11.196 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:11.196 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:11.196 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:11.196 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:11.196 17:32:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.196 17:32:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:11.196 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:11.196 17:32:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.196 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:11.196 "name": "Existed_Raid", 00:35:11.196 "uuid": "7ae99b0d-55de-4f96-8879-e0cf2c7badf7", 00:35:11.196 "strip_size_kb": 0, 00:35:11.196 "state": "online", 00:35:11.196 "raid_level": "raid1", 00:35:11.196 "superblock": true, 00:35:11.196 "num_base_bdevs": 3, 00:35:11.196 "num_base_bdevs_discovered": 3, 00:35:11.196 "num_base_bdevs_operational": 3, 00:35:11.196 "base_bdevs_list": [ 00:35:11.196 { 00:35:11.196 "name": "NewBaseBdev", 00:35:11.196 "uuid": "dce25147-dba3-4193-b513-bcc58b3cfd85", 00:35:11.196 "is_configured": true, 00:35:11.196 "data_offset": 2048, 00:35:11.196 "data_size": 63488 00:35:11.196 }, 00:35:11.196 { 00:35:11.196 "name": "BaseBdev2", 00:35:11.196 "uuid": "e81bb7f6-b2e3-42e4-a42d-87d1bbd62e86", 00:35:11.196 "is_configured": true, 00:35:11.196 "data_offset": 2048, 00:35:11.196 "data_size": 63488 00:35:11.196 }, 00:35:11.196 
{ 00:35:11.196 "name": "BaseBdev3", 00:35:11.196 "uuid": "b8c5cf0c-8996-4c1b-b396-9ae7438ab644", 00:35:11.196 "is_configured": true, 00:35:11.196 "data_offset": 2048, 00:35:11.196 "data_size": 63488 00:35:11.196 } 00:35:11.196 ] 00:35:11.196 }' 00:35:11.196 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:11.196 17:32:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:11.765 17:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:35:11.765 17:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:35:11.765 17:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:35:11.765 17:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:35:11.765 17:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:35:11.765 17:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:35:11.765 17:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:35:11.765 17:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.765 17:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:11.765 17:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:35:11.765 [2024-11-26 17:32:12.233771] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:11.765 17:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.765 17:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:11.765 "name": "Existed_Raid", 00:35:11.765 
"aliases": [ 00:35:11.765 "7ae99b0d-55de-4f96-8879-e0cf2c7badf7" 00:35:11.765 ], 00:35:11.765 "product_name": "Raid Volume", 00:35:11.765 "block_size": 512, 00:35:11.765 "num_blocks": 63488, 00:35:11.765 "uuid": "7ae99b0d-55de-4f96-8879-e0cf2c7badf7", 00:35:11.765 "assigned_rate_limits": { 00:35:11.765 "rw_ios_per_sec": 0, 00:35:11.765 "rw_mbytes_per_sec": 0, 00:35:11.765 "r_mbytes_per_sec": 0, 00:35:11.765 "w_mbytes_per_sec": 0 00:35:11.765 }, 00:35:11.765 "claimed": false, 00:35:11.765 "zoned": false, 00:35:11.765 "supported_io_types": { 00:35:11.765 "read": true, 00:35:11.765 "write": true, 00:35:11.765 "unmap": false, 00:35:11.765 "flush": false, 00:35:11.765 "reset": true, 00:35:11.765 "nvme_admin": false, 00:35:11.765 "nvme_io": false, 00:35:11.765 "nvme_io_md": false, 00:35:11.765 "write_zeroes": true, 00:35:11.765 "zcopy": false, 00:35:11.765 "get_zone_info": false, 00:35:11.765 "zone_management": false, 00:35:11.765 "zone_append": false, 00:35:11.765 "compare": false, 00:35:11.765 "compare_and_write": false, 00:35:11.765 "abort": false, 00:35:11.765 "seek_hole": false, 00:35:11.765 "seek_data": false, 00:35:11.765 "copy": false, 00:35:11.765 "nvme_iov_md": false 00:35:11.765 }, 00:35:11.765 "memory_domains": [ 00:35:11.765 { 00:35:11.765 "dma_device_id": "system", 00:35:11.765 "dma_device_type": 1 00:35:11.765 }, 00:35:11.765 { 00:35:11.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:11.765 "dma_device_type": 2 00:35:11.765 }, 00:35:11.765 { 00:35:11.765 "dma_device_id": "system", 00:35:11.765 "dma_device_type": 1 00:35:11.765 }, 00:35:11.765 { 00:35:11.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:11.765 "dma_device_type": 2 00:35:11.765 }, 00:35:11.765 { 00:35:11.765 "dma_device_id": "system", 00:35:11.765 "dma_device_type": 1 00:35:11.765 }, 00:35:11.765 { 00:35:11.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:11.765 "dma_device_type": 2 00:35:11.765 } 00:35:11.765 ], 00:35:11.765 "driver_specific": { 00:35:11.765 "raid": { 00:35:11.765 
"uuid": "7ae99b0d-55de-4f96-8879-e0cf2c7badf7", 00:35:11.765 "strip_size_kb": 0, 00:35:11.765 "state": "online", 00:35:11.765 "raid_level": "raid1", 00:35:11.765 "superblock": true, 00:35:11.765 "num_base_bdevs": 3, 00:35:11.765 "num_base_bdevs_discovered": 3, 00:35:11.765 "num_base_bdevs_operational": 3, 00:35:11.765 "base_bdevs_list": [ 00:35:11.765 { 00:35:11.765 "name": "NewBaseBdev", 00:35:11.765 "uuid": "dce25147-dba3-4193-b513-bcc58b3cfd85", 00:35:11.765 "is_configured": true, 00:35:11.765 "data_offset": 2048, 00:35:11.765 "data_size": 63488 00:35:11.765 }, 00:35:11.765 { 00:35:11.765 "name": "BaseBdev2", 00:35:11.765 "uuid": "e81bb7f6-b2e3-42e4-a42d-87d1bbd62e86", 00:35:11.765 "is_configured": true, 00:35:11.765 "data_offset": 2048, 00:35:11.765 "data_size": 63488 00:35:11.765 }, 00:35:11.765 { 00:35:11.765 "name": "BaseBdev3", 00:35:11.765 "uuid": "b8c5cf0c-8996-4c1b-b396-9ae7438ab644", 00:35:11.765 "is_configured": true, 00:35:11.765 "data_offset": 2048, 00:35:11.765 "data_size": 63488 00:35:11.765 } 00:35:11.765 ] 00:35:11.765 } 00:35:11.765 } 00:35:11.765 }' 00:35:11.765 17:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:11.765 17:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:35:11.765 BaseBdev2 00:35:11.765 BaseBdev3' 00:35:11.765 17:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:11.765 17:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:35:11.765 17:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:11.765 17:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:35:11.765 17:32:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.765 17:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:11.765 17:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:11.765 17:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.765 17:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:11.765 17:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:11.765 17:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:11.765 17:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:35:11.765 17:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.765 17:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:11.765 17:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:11.765 17:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.765 17:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:11.765 17:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:11.765 17:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:11.765 17:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:35:11.765 17:32:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.765 17:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:11.765 17:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:11.765 17:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.025 17:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:12.025 17:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:12.025 17:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:35:12.025 17:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.025 17:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:12.025 [2024-11-26 17:32:12.469025] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:12.025 [2024-11-26 17:32:12.469060] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:12.025 [2024-11-26 17:32:12.469159] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:12.025 [2024-11-26 17:32:12.469447] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:12.025 [2024-11-26 17:32:12.469456] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:35:12.025 17:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.025 17:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68266 00:35:12.025 17:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # 
'[' -z 68266 ']' 00:35:12.025 17:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68266 00:35:12.025 17:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:35:12.025 17:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:12.025 17:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68266 00:35:12.025 17:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:12.025 17:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:12.025 17:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68266' 00:35:12.025 killing process with pid 68266 00:35:12.025 17:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 68266 00:35:12.025 [2024-11-26 17:32:12.516852] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:12.025 17:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68266 00:35:12.285 [2024-11-26 17:32:12.817748] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:13.664 17:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:35:13.664 00:35:13.664 real 0m10.875s 00:35:13.664 user 0m17.279s 00:35:13.664 sys 0m1.818s 00:35:13.664 17:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:13.664 17:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:13.664 ************************************ 00:35:13.664 END TEST raid_state_function_test_sb 00:35:13.664 ************************************ 00:35:13.664 17:32:14 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:35:13.664 17:32:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:13.664 17:32:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:13.664 17:32:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:13.664 ************************************ 00:35:13.664 START TEST raid_superblock_test 00:35:13.664 ************************************ 00:35:13.664 17:32:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:35:13.664 17:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:35:13.664 17:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:35:13.664 17:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:35:13.664 17:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:35:13.664 17:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:35:13.664 17:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:35:13.664 17:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:35:13.664 17:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:35:13.664 17:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:35:13.664 17:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:35:13.664 17:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:35:13.664 17:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:35:13.664 17:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:35:13.664 17:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:35:13.664 17:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:35:13.664 17:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68892 00:35:13.664 17:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:35:13.664 17:32:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68892 00:35:13.664 17:32:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68892 ']' 00:35:13.664 17:32:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:13.664 17:32:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:13.664 17:32:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:13.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:13.664 17:32:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:13.664 17:32:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:13.664 [2024-11-26 17:32:14.201935] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:35:13.664 [2024-11-26 17:32:14.202188] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68892 ] 00:35:13.924 [2024-11-26 17:32:14.385982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:13.924 [2024-11-26 17:32:14.506439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:14.183 [2024-11-26 17:32:14.716280] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:14.183 [2024-11-26 17:32:14.716444] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:14.443 17:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:14.443 17:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:35:14.443 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:35:14.443 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:35:14.443 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:35:14.443 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:35:14.443 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:35:14.443 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:14.443 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:35:14.443 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:14.443 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:35:14.443 
17:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.443 17:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:14.443 malloc1 00:35:14.443 17:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.443 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:35:14.443 17:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.443 17:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:14.443 [2024-11-26 17:32:15.116863] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:35:14.443 [2024-11-26 17:32:15.116972] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:14.443 [2024-11-26 17:32:15.117016] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:35:14.443 [2024-11-26 17:32:15.117025] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:14.443 [2024-11-26 17:32:15.119157] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:14.443 [2024-11-26 17:32:15.119208] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:35:14.443 pt1 00:35:14.443 17:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.443 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:35:14.443 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:35:14.443 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:35:14.443 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:35:14.443 17:32:15 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:35:14.443 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:14.443 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:35:14.443 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:14.443 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:35:14.443 17:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.443 17:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:14.703 malloc2 00:35:14.703 17:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.703 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:14.703 17:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.703 17:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:14.704 [2024-11-26 17:32:15.169267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:14.704 [2024-11-26 17:32:15.169372] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:14.704 [2024-11-26 17:32:15.169433] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:35:14.704 [2024-11-26 17:32:15.169466] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:14.704 [2024-11-26 17:32:15.171630] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:14.704 [2024-11-26 17:32:15.171701] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:14.704 
pt2 00:35:14.704 17:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.704 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:35:14.704 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:35:14.704 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:35:14.704 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:35:14.704 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:35:14.704 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:14.704 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:35:14.704 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:14.704 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:35:14.704 17:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.704 17:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:14.704 malloc3 00:35:14.704 17:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.704 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:35:14.704 17:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.704 17:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:14.704 [2024-11-26 17:32:15.240997] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:35:14.704 [2024-11-26 17:32:15.241097] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:14.704 [2024-11-26 17:32:15.241137] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:35:14.704 [2024-11-26 17:32:15.241170] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:14.704 [2024-11-26 17:32:15.243369] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:14.704 [2024-11-26 17:32:15.243441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:35:14.704 pt3 00:35:14.704 17:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.704 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:35:14.704 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:35:14.704 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:35:14.704 17:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.704 17:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:14.704 [2024-11-26 17:32:15.253033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:35:14.704 [2024-11-26 17:32:15.254842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:14.704 [2024-11-26 17:32:15.254911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:35:14.704 [2024-11-26 17:32:15.255067] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:35:14.704 [2024-11-26 17:32:15.255084] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:35:14.704 [2024-11-26 17:32:15.255330] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:35:14.704 
[2024-11-26 17:32:15.255486] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:35:14.704 [2024-11-26 17:32:15.255498] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:35:14.704 [2024-11-26 17:32:15.255689] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:14.704 17:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.704 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:35:14.704 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:14.704 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:14.704 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:14.704 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:14.704 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:14.704 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:14.704 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:14.704 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:14.704 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:14.704 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:14.704 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:14.704 17:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.704 17:32:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:35:14.704 17:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.704 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:14.704 "name": "raid_bdev1", 00:35:14.704 "uuid": "d1c5143f-20f6-43c9-ae40-43a5e3976529", 00:35:14.704 "strip_size_kb": 0, 00:35:14.704 "state": "online", 00:35:14.704 "raid_level": "raid1", 00:35:14.704 "superblock": true, 00:35:14.704 "num_base_bdevs": 3, 00:35:14.704 "num_base_bdevs_discovered": 3, 00:35:14.704 "num_base_bdevs_operational": 3, 00:35:14.704 "base_bdevs_list": [ 00:35:14.704 { 00:35:14.704 "name": "pt1", 00:35:14.704 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:14.704 "is_configured": true, 00:35:14.704 "data_offset": 2048, 00:35:14.704 "data_size": 63488 00:35:14.704 }, 00:35:14.704 { 00:35:14.704 "name": "pt2", 00:35:14.704 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:14.704 "is_configured": true, 00:35:14.704 "data_offset": 2048, 00:35:14.704 "data_size": 63488 00:35:14.704 }, 00:35:14.704 { 00:35:14.704 "name": "pt3", 00:35:14.704 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:14.704 "is_configured": true, 00:35:14.704 "data_offset": 2048, 00:35:14.704 "data_size": 63488 00:35:14.704 } 00:35:14.704 ] 00:35:14.704 }' 00:35:14.704 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:14.704 17:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:15.274 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:35:15.274 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:35:15.274 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:35:15.274 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:35:15.274 17:32:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:35:15.274 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:35:15.274 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:35:15.274 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:35:15.274 17:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.274 17:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:15.274 [2024-11-26 17:32:15.732638] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:15.274 17:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.274 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:15.274 "name": "raid_bdev1", 00:35:15.274 "aliases": [ 00:35:15.274 "d1c5143f-20f6-43c9-ae40-43a5e3976529" 00:35:15.274 ], 00:35:15.274 "product_name": "Raid Volume", 00:35:15.274 "block_size": 512, 00:35:15.274 "num_blocks": 63488, 00:35:15.274 "uuid": "d1c5143f-20f6-43c9-ae40-43a5e3976529", 00:35:15.274 "assigned_rate_limits": { 00:35:15.274 "rw_ios_per_sec": 0, 00:35:15.274 "rw_mbytes_per_sec": 0, 00:35:15.274 "r_mbytes_per_sec": 0, 00:35:15.274 "w_mbytes_per_sec": 0 00:35:15.274 }, 00:35:15.274 "claimed": false, 00:35:15.274 "zoned": false, 00:35:15.274 "supported_io_types": { 00:35:15.274 "read": true, 00:35:15.274 "write": true, 00:35:15.274 "unmap": false, 00:35:15.274 "flush": false, 00:35:15.274 "reset": true, 00:35:15.274 "nvme_admin": false, 00:35:15.274 "nvme_io": false, 00:35:15.274 "nvme_io_md": false, 00:35:15.274 "write_zeroes": true, 00:35:15.274 "zcopy": false, 00:35:15.274 "get_zone_info": false, 00:35:15.274 "zone_management": false, 00:35:15.274 "zone_append": false, 00:35:15.274 "compare": false, 00:35:15.274 
"compare_and_write": false, 00:35:15.274 "abort": false, 00:35:15.274 "seek_hole": false, 00:35:15.274 "seek_data": false, 00:35:15.274 "copy": false, 00:35:15.274 "nvme_iov_md": false 00:35:15.274 }, 00:35:15.274 "memory_domains": [ 00:35:15.274 { 00:35:15.274 "dma_device_id": "system", 00:35:15.274 "dma_device_type": 1 00:35:15.274 }, 00:35:15.274 { 00:35:15.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:15.274 "dma_device_type": 2 00:35:15.274 }, 00:35:15.274 { 00:35:15.274 "dma_device_id": "system", 00:35:15.274 "dma_device_type": 1 00:35:15.274 }, 00:35:15.274 { 00:35:15.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:15.274 "dma_device_type": 2 00:35:15.274 }, 00:35:15.274 { 00:35:15.274 "dma_device_id": "system", 00:35:15.274 "dma_device_type": 1 00:35:15.274 }, 00:35:15.274 { 00:35:15.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:15.274 "dma_device_type": 2 00:35:15.274 } 00:35:15.274 ], 00:35:15.274 "driver_specific": { 00:35:15.274 "raid": { 00:35:15.274 "uuid": "d1c5143f-20f6-43c9-ae40-43a5e3976529", 00:35:15.274 "strip_size_kb": 0, 00:35:15.274 "state": "online", 00:35:15.274 "raid_level": "raid1", 00:35:15.274 "superblock": true, 00:35:15.274 "num_base_bdevs": 3, 00:35:15.274 "num_base_bdevs_discovered": 3, 00:35:15.274 "num_base_bdevs_operational": 3, 00:35:15.274 "base_bdevs_list": [ 00:35:15.274 { 00:35:15.274 "name": "pt1", 00:35:15.274 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:15.274 "is_configured": true, 00:35:15.274 "data_offset": 2048, 00:35:15.274 "data_size": 63488 00:35:15.274 }, 00:35:15.274 { 00:35:15.274 "name": "pt2", 00:35:15.274 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:15.274 "is_configured": true, 00:35:15.274 "data_offset": 2048, 00:35:15.274 "data_size": 63488 00:35:15.274 }, 00:35:15.274 { 00:35:15.274 "name": "pt3", 00:35:15.274 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:15.274 "is_configured": true, 00:35:15.274 "data_offset": 2048, 00:35:15.274 "data_size": 63488 00:35:15.274 } 
00:35:15.274 ] 00:35:15.274 } 00:35:15.274 } 00:35:15.274 }' 00:35:15.274 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:15.274 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:35:15.274 pt2 00:35:15.274 pt3' 00:35:15.274 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:15.274 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:35:15.274 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:15.274 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:15.274 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:35:15.274 17:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.274 17:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:15.274 17:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.274 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:15.274 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:15.274 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:15.274 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:15.274 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:35:15.274 17:32:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.274 17:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:15.274 17:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.535 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:15.535 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:15.535 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:15.535 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:15.535 17:32:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:35:15.535 17:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.535 17:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:15.535 17:32:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.535 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:15.535 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:15.535 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:35:15.535 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:35:15.535 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.535 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:15.535 [2024-11-26 17:32:16.032078] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:15.535 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:35:15.535 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d1c5143f-20f6-43c9-ae40-43a5e3976529 00:35:15.535 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d1c5143f-20f6-43c9-ae40-43a5e3976529 ']' 00:35:15.535 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:35:15.535 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.535 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:15.535 [2024-11-26 17:32:16.063668] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:15.535 [2024-11-26 17:32:16.063697] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:15.535 [2024-11-26 17:32:16.063777] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:15.535 [2024-11-26 17:32:16.063880] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:15.535 [2024-11-26 17:32:16.063891] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:35:15.535 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.535 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:15.535 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.535 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:15.535 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:35:15.535 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.535 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
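[Annotation, not part of the captured log] The `@189`/`@192`/`@193` steps above build a comparison string from four bdev fields and match it with an escaped `[[ ]]` pattern. A minimal standalone sketch of that idiom (the values are copied from the log; the variable names match `bdev_raid.sh`):

```shell
# The four fields block_size/md_size/md_interleave/dif_type are
# join(" ")-ed by jq, so null fields leave trailing blanks.
cmp_raid_bdev='512   '   # "512" plus three empty fields, as in the log
cmp_base_bdev='512   '
# Escaping every character of the right-hand side forces a literal
# (non-glob) match in [[ ]], trailing blanks included:
if [[ $cmp_base_bdev == \5\1\2\ \ \  ]]; then
  result=match
else
  result=mismatch
fi
```

The escaped-space pattern is why the xtrace shows `[[ 512 == \5\1\2\ \ \ ]]`: an unescaped pattern would glob-match, and unquoted trailing blanks would otherwise be invisible in the comparison.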
00:35:15.535 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:35:15.535 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:35:15.535 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:35:15.535 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.535 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:15.535 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.535 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:35:15.535 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:35:15.535 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.535 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:15.535 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.535 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:35:15.535 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:35:15.535 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.535 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:15.535 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.535 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:35:15.535 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.535 17:32:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:35:15.535 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:35:15.535 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.536 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:35:15.536 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:35:15.536 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:35:15.536 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:35:15.536 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:15.536 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:15.536 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:15.536 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:15.536 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:35:15.536 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.536 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:15.536 [2024-11-26 17:32:16.203471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:35:15.536 [2024-11-26 17:32:16.205466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:35:15.536 [2024-11-26 17:32:16.205529] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:35:15.536 [2024-11-26 17:32:16.205654] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:35:15.536 [2024-11-26 17:32:16.205744] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:35:15.536 [2024-11-26 17:32:16.205829] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:35:15.536 [2024-11-26 17:32:16.205894] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:15.536 [2024-11-26 17:32:16.205927] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:35:15.536 request: 00:35:15.536 { 00:35:15.536 "name": "raid_bdev1", 00:35:15.536 "raid_level": "raid1", 00:35:15.536 "base_bdevs": [ 00:35:15.536 "malloc1", 00:35:15.536 "malloc2", 00:35:15.536 "malloc3" 00:35:15.536 ], 00:35:15.536 "superblock": false, 00:35:15.536 "method": "bdev_raid_create", 00:35:15.536 "req_id": 1 00:35:15.536 } 00:35:15.536 Got JSON-RPC error response 00:35:15.536 response: 00:35:15.536 { 00:35:15.536 "code": -17, 00:35:15.536 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:35:15.536 } 00:35:15.536 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:15.536 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:35:15.536 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:15.536 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:15.536 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:15.536 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq 
-r '.[]' 00:35:15.536 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:15.536 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.536 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:15.795 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.795 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:35:15.795 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:35:15.795 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:35:15.795 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.795 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:15.795 [2024-11-26 17:32:16.259389] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:35:15.795 [2024-11-26 17:32:16.259465] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:15.795 [2024-11-26 17:32:16.259488] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:35:15.795 [2024-11-26 17:32:16.259498] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:15.795 [2024-11-26 17:32:16.262113] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:15.795 [2024-11-26 17:32:16.262203] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:35:15.795 [2024-11-26 17:32:16.262329] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:35:15.795 [2024-11-26 17:32:16.262412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:35:15.795 pt1 00:35:15.795 
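[Annotation, not part of the captured log] The `NOT rpc_cmd bdev_raid_create ...` sequence at `autotest_common.sh@652`-`@655` above is a negative test: the RPC is expected to fail with `File exists`, and the wrapper succeeds only when it does. A simplified sketch of that pattern (the real helper in `autotest_common.sh` also validates the wrapped command via `valid_exec_arg`; this version is illustrative only):

```shell
# Run a command that is expected to fail; succeed only when it fails.
NOT() {
  local es=0
  "$@" || es=$?
  # es > 128 would mean death by signal rather than a clean error,
  # which the (( es > 128 )) check in the log also rejects.
  (( es != 0 && es <= 128 ))
}
NOT false && outcome="negative test passed"
```

This is why the log shows `es=1` followed by `(( !es == 0 ))`: the non-zero exit of the duplicate-create RPC is the passing condition.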
17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.795 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:35:15.795 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:15.795 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:15.795 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:15.795 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:15.795 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:15.795 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:15.795 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:15.795 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:15.795 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:15.795 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:15.795 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:15.795 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.795 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:15.795 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.795 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:15.795 "name": "raid_bdev1", 00:35:15.795 "uuid": "d1c5143f-20f6-43c9-ae40-43a5e3976529", 00:35:15.795 "strip_size_kb": 0, 00:35:15.795 
"state": "configuring", 00:35:15.795 "raid_level": "raid1", 00:35:15.795 "superblock": true, 00:35:15.795 "num_base_bdevs": 3, 00:35:15.795 "num_base_bdevs_discovered": 1, 00:35:15.795 "num_base_bdevs_operational": 3, 00:35:15.795 "base_bdevs_list": [ 00:35:15.795 { 00:35:15.795 "name": "pt1", 00:35:15.795 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:15.795 "is_configured": true, 00:35:15.795 "data_offset": 2048, 00:35:15.795 "data_size": 63488 00:35:15.795 }, 00:35:15.795 { 00:35:15.795 "name": null, 00:35:15.795 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:15.795 "is_configured": false, 00:35:15.795 "data_offset": 2048, 00:35:15.795 "data_size": 63488 00:35:15.795 }, 00:35:15.795 { 00:35:15.795 "name": null, 00:35:15.795 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:15.795 "is_configured": false, 00:35:15.795 "data_offset": 2048, 00:35:15.795 "data_size": 63488 00:35:15.795 } 00:35:15.795 ] 00:35:15.795 }' 00:35:15.795 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:15.795 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:16.053 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:35:16.054 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:16.054 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.054 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:16.054 [2024-11-26 17:32:16.694638] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:16.054 [2024-11-26 17:32:16.694760] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:16.054 [2024-11-26 17:32:16.694802] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:35:16.054 
[2024-11-26 17:32:16.694831] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:16.054 [2024-11-26 17:32:16.695332] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:16.054 [2024-11-26 17:32:16.695400] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:16.054 [2024-11-26 17:32:16.695545] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:35:16.054 [2024-11-26 17:32:16.695603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:16.054 pt2 00:35:16.054 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.054 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:35:16.054 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.054 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:16.054 [2024-11-26 17:32:16.702655] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:35:16.054 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.054 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:35:16.054 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:16.054 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:16.054 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:16.054 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:16.054 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:16.054 17:32:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:16.054 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:16.054 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:16.054 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:16.054 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:16.054 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:16.054 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.054 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:16.054 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.313 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:16.313 "name": "raid_bdev1", 00:35:16.313 "uuid": "d1c5143f-20f6-43c9-ae40-43a5e3976529", 00:35:16.313 "strip_size_kb": 0, 00:35:16.313 "state": "configuring", 00:35:16.313 "raid_level": "raid1", 00:35:16.313 "superblock": true, 00:35:16.313 "num_base_bdevs": 3, 00:35:16.313 "num_base_bdevs_discovered": 1, 00:35:16.313 "num_base_bdevs_operational": 3, 00:35:16.313 "base_bdevs_list": [ 00:35:16.313 { 00:35:16.313 "name": "pt1", 00:35:16.313 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:16.313 "is_configured": true, 00:35:16.313 "data_offset": 2048, 00:35:16.313 "data_size": 63488 00:35:16.313 }, 00:35:16.313 { 00:35:16.313 "name": null, 00:35:16.313 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:16.313 "is_configured": false, 00:35:16.313 "data_offset": 0, 00:35:16.313 "data_size": 63488 00:35:16.313 }, 00:35:16.313 { 00:35:16.313 "name": null, 00:35:16.313 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:16.313 "is_configured": false, 00:35:16.313 
"data_offset": 2048, 00:35:16.313 "data_size": 63488 00:35:16.313 } 00:35:16.313 ] 00:35:16.313 }' 00:35:16.313 17:32:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:16.313 17:32:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:16.572 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:35:16.572 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:35:16.572 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:16.572 17:32:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.572 17:32:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:16.572 [2024-11-26 17:32:17.149834] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:16.572 [2024-11-26 17:32:17.149981] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:16.572 [2024-11-26 17:32:17.150023] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:35:16.572 [2024-11-26 17:32:17.150057] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:16.572 [2024-11-26 17:32:17.150616] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:16.572 [2024-11-26 17:32:17.150685] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:16.572 [2024-11-26 17:32:17.150806] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:35:16.572 [2024-11-26 17:32:17.150872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:16.572 pt2 00:35:16.572 17:32:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.572 17:32:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:35:16.572 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:35:16.572 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:35:16.572 17:32:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.572 17:32:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:16.572 [2024-11-26 17:32:17.161780] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:35:16.572 [2024-11-26 17:32:17.161871] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:16.572 [2024-11-26 17:32:17.161902] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:35:16.572 [2024-11-26 17:32:17.161932] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:16.572 [2024-11-26 17:32:17.162356] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:16.572 [2024-11-26 17:32:17.162434] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:35:16.572 [2024-11-26 17:32:17.162540] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:35:16.572 [2024-11-26 17:32:17.162598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:35:16.572 [2024-11-26 17:32:17.162760] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:35:16.573 [2024-11-26 17:32:17.162778] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:35:16.573 [2024-11-26 17:32:17.163014] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:35:16.573 [2024-11-26 17:32:17.163162] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:35:16.573 [2024-11-26 17:32:17.163170] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:35:16.573 [2024-11-26 17:32:17.163302] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:16.573 pt3 00:35:16.573 17:32:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.573 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:35:16.573 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:35:16.573 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:35:16.573 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:16.573 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:16.573 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:16.573 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:16.573 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:16.573 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:16.573 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:16.573 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:16.573 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:16.573 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:16.573 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:16.573 17:32:17 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.573 17:32:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:16.573 17:32:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.573 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:16.573 "name": "raid_bdev1", 00:35:16.573 "uuid": "d1c5143f-20f6-43c9-ae40-43a5e3976529", 00:35:16.573 "strip_size_kb": 0, 00:35:16.573 "state": "online", 00:35:16.573 "raid_level": "raid1", 00:35:16.573 "superblock": true, 00:35:16.573 "num_base_bdevs": 3, 00:35:16.573 "num_base_bdevs_discovered": 3, 00:35:16.573 "num_base_bdevs_operational": 3, 00:35:16.573 "base_bdevs_list": [ 00:35:16.573 { 00:35:16.573 "name": "pt1", 00:35:16.573 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:16.573 "is_configured": true, 00:35:16.573 "data_offset": 2048, 00:35:16.573 "data_size": 63488 00:35:16.573 }, 00:35:16.573 { 00:35:16.573 "name": "pt2", 00:35:16.573 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:16.573 "is_configured": true, 00:35:16.573 "data_offset": 2048, 00:35:16.573 "data_size": 63488 00:35:16.573 }, 00:35:16.573 { 00:35:16.573 "name": "pt3", 00:35:16.573 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:16.573 "is_configured": true, 00:35:16.573 "data_offset": 2048, 00:35:16.573 "data_size": 63488 00:35:16.573 } 00:35:16.573 ] 00:35:16.573 }' 00:35:16.573 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:16.573 17:32:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:17.168 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:35:17.168 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:35:17.168 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:35:17.168 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:35:17.168 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:35:17.168 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:35:17.168 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:35:17.168 17:32:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.168 17:32:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:17.168 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:35:17.168 [2024-11-26 17:32:17.637398] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:17.168 17:32:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.168 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:17.168 "name": "raid_bdev1", 00:35:17.168 "aliases": [ 00:35:17.168 "d1c5143f-20f6-43c9-ae40-43a5e3976529" 00:35:17.168 ], 00:35:17.168 "product_name": "Raid Volume", 00:35:17.168 "block_size": 512, 00:35:17.168 "num_blocks": 63488, 00:35:17.168 "uuid": "d1c5143f-20f6-43c9-ae40-43a5e3976529", 00:35:17.168 "assigned_rate_limits": { 00:35:17.168 "rw_ios_per_sec": 0, 00:35:17.168 "rw_mbytes_per_sec": 0, 00:35:17.168 "r_mbytes_per_sec": 0, 00:35:17.168 "w_mbytes_per_sec": 0 00:35:17.168 }, 00:35:17.168 "claimed": false, 00:35:17.168 "zoned": false, 00:35:17.168 "supported_io_types": { 00:35:17.168 "read": true, 00:35:17.168 "write": true, 00:35:17.168 "unmap": false, 00:35:17.168 "flush": false, 00:35:17.168 "reset": true, 00:35:17.168 "nvme_admin": false, 00:35:17.168 "nvme_io": false, 00:35:17.168 "nvme_io_md": false, 00:35:17.168 "write_zeroes": true, 00:35:17.168 "zcopy": false, 00:35:17.168 "get_zone_info": 
false, 00:35:17.168 "zone_management": false, 00:35:17.168 "zone_append": false, 00:35:17.168 "compare": false, 00:35:17.168 "compare_and_write": false, 00:35:17.168 "abort": false, 00:35:17.168 "seek_hole": false, 00:35:17.168 "seek_data": false, 00:35:17.168 "copy": false, 00:35:17.168 "nvme_iov_md": false 00:35:17.168 }, 00:35:17.168 "memory_domains": [ 00:35:17.168 { 00:35:17.168 "dma_device_id": "system", 00:35:17.168 "dma_device_type": 1 00:35:17.168 }, 00:35:17.168 { 00:35:17.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:17.168 "dma_device_type": 2 00:35:17.168 }, 00:35:17.168 { 00:35:17.168 "dma_device_id": "system", 00:35:17.168 "dma_device_type": 1 00:35:17.168 }, 00:35:17.168 { 00:35:17.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:17.169 "dma_device_type": 2 00:35:17.169 }, 00:35:17.169 { 00:35:17.169 "dma_device_id": "system", 00:35:17.169 "dma_device_type": 1 00:35:17.169 }, 00:35:17.169 { 00:35:17.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:17.169 "dma_device_type": 2 00:35:17.169 } 00:35:17.169 ], 00:35:17.169 "driver_specific": { 00:35:17.169 "raid": { 00:35:17.169 "uuid": "d1c5143f-20f6-43c9-ae40-43a5e3976529", 00:35:17.169 "strip_size_kb": 0, 00:35:17.169 "state": "online", 00:35:17.169 "raid_level": "raid1", 00:35:17.169 "superblock": true, 00:35:17.169 "num_base_bdevs": 3, 00:35:17.169 "num_base_bdevs_discovered": 3, 00:35:17.169 "num_base_bdevs_operational": 3, 00:35:17.169 "base_bdevs_list": [ 00:35:17.169 { 00:35:17.169 "name": "pt1", 00:35:17.169 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:17.169 "is_configured": true, 00:35:17.169 "data_offset": 2048, 00:35:17.169 "data_size": 63488 00:35:17.169 }, 00:35:17.169 { 00:35:17.169 "name": "pt2", 00:35:17.169 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:17.169 "is_configured": true, 00:35:17.169 "data_offset": 2048, 00:35:17.169 "data_size": 63488 00:35:17.169 }, 00:35:17.169 { 00:35:17.169 "name": "pt3", 00:35:17.169 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:35:17.169 "is_configured": true, 00:35:17.169 "data_offset": 2048, 00:35:17.169 "data_size": 63488 00:35:17.169 } 00:35:17.169 ] 00:35:17.169 } 00:35:17.169 } 00:35:17.169 }' 00:35:17.169 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:17.169 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:35:17.169 pt2 00:35:17.169 pt3' 00:35:17.169 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:17.169 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:35:17.169 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:17.169 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:35:17.169 17:32:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.169 17:32:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:17.169 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:17.169 17:32:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.169 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:17.169 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:17.169 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:17.169 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:17.169 17:32:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:35:17.169 17:32:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.169 17:32:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:17.169 17:32:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.427 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:17.428 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:17.428 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:17.428 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:17.428 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:35:17.428 17:32:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.428 17:32:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:17.428 17:32:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.428 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:17.428 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:17.428 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:35:17.428 17:32:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.428 17:32:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:17.428 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:35:17.428 [2024-11-26 17:32:17.928929] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:17.428 17:32:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.428 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d1c5143f-20f6-43c9-ae40-43a5e3976529 '!=' d1c5143f-20f6-43c9-ae40-43a5e3976529 ']' 00:35:17.428 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:35:17.428 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:35:17.428 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:35:17.428 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:35:17.428 17:32:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.428 17:32:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:17.428 [2024-11-26 17:32:17.976586] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:35:17.428 17:32:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.428 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:35:17.428 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:17.428 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:17.428 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:17.428 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:17.428 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:17.428 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:17.428 17:32:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:17.428 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:17.428 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:17.428 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:17.428 17:32:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:17.428 17:32:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.428 17:32:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:17.428 17:32:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.428 17:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:17.428 "name": "raid_bdev1", 00:35:17.428 "uuid": "d1c5143f-20f6-43c9-ae40-43a5e3976529", 00:35:17.428 "strip_size_kb": 0, 00:35:17.428 "state": "online", 00:35:17.428 "raid_level": "raid1", 00:35:17.428 "superblock": true, 00:35:17.428 "num_base_bdevs": 3, 00:35:17.428 "num_base_bdevs_discovered": 2, 00:35:17.428 "num_base_bdevs_operational": 2, 00:35:17.428 "base_bdevs_list": [ 00:35:17.428 { 00:35:17.428 "name": null, 00:35:17.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:17.428 "is_configured": false, 00:35:17.428 "data_offset": 0, 00:35:17.428 "data_size": 63488 00:35:17.428 }, 00:35:17.428 { 00:35:17.428 "name": "pt2", 00:35:17.428 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:17.428 "is_configured": true, 00:35:17.428 "data_offset": 2048, 00:35:17.428 "data_size": 63488 00:35:17.428 }, 00:35:17.428 { 00:35:17.428 "name": "pt3", 00:35:17.428 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:17.428 "is_configured": true, 00:35:17.428 "data_offset": 2048, 00:35:17.428 "data_size": 63488 00:35:17.428 } 
00:35:17.428 ] 00:35:17.428 }' 00:35:17.428 17:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:17.428 17:32:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:17.995 17:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:35:17.995 17:32:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.995 17:32:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:17.995 [2024-11-26 17:32:18.471681] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:17.995 [2024-11-26 17:32:18.471714] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:17.995 [2024-11-26 17:32:18.471806] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:17.995 [2024-11-26 17:32:18.471899] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:17.995 [2024-11-26 17:32:18.471916] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:35:17.995 17:32:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.995 17:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:17.995 17:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:35:17.995 17:32:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.995 17:32:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:17.995 17:32:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.995 17:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:35:17.995 17:32:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:35:17.995 17:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:35:17.995 17:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:35:17.995 17:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:35:17.995 17:32:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.995 17:32:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:17.995 17:32:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.995 17:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:35:17.995 17:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:35:17.995 17:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:35:17.995 17:32:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.995 17:32:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:17.995 17:32:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.995 17:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:35:17.995 17:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:35:17.995 17:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:35:17.995 17:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:35:17.996 17:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:17.996 17:32:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.996 17:32:18 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:17.996 [2024-11-26 17:32:18.555555] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:17.996 [2024-11-26 17:32:18.555637] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:17.996 [2024-11-26 17:32:18.555658] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:35:17.996 [2024-11-26 17:32:18.555668] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:17.996 [2024-11-26 17:32:18.558063] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:17.996 [2024-11-26 17:32:18.558111] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:17.996 [2024-11-26 17:32:18.558208] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:35:17.996 [2024-11-26 17:32:18.558257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:17.996 pt2 00:35:17.996 17:32:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.996 17:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:35:17.996 17:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:17.996 17:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:17.996 17:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:17.996 17:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:17.996 17:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:17.996 17:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:17.996 17:32:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:17.996 17:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:17.996 17:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:17.996 17:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:17.996 17:32:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.996 17:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:17.996 17:32:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:17.996 17:32:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.996 17:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:17.996 "name": "raid_bdev1", 00:35:17.996 "uuid": "d1c5143f-20f6-43c9-ae40-43a5e3976529", 00:35:17.996 "strip_size_kb": 0, 00:35:17.996 "state": "configuring", 00:35:17.996 "raid_level": "raid1", 00:35:17.996 "superblock": true, 00:35:17.996 "num_base_bdevs": 3, 00:35:17.996 "num_base_bdevs_discovered": 1, 00:35:17.996 "num_base_bdevs_operational": 2, 00:35:17.996 "base_bdevs_list": [ 00:35:17.996 { 00:35:17.996 "name": null, 00:35:17.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:17.996 "is_configured": false, 00:35:17.996 "data_offset": 2048, 00:35:17.996 "data_size": 63488 00:35:17.996 }, 00:35:17.996 { 00:35:17.996 "name": "pt2", 00:35:17.996 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:17.996 "is_configured": true, 00:35:17.996 "data_offset": 2048, 00:35:17.996 "data_size": 63488 00:35:17.996 }, 00:35:17.996 { 00:35:17.996 "name": null, 00:35:17.996 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:17.996 "is_configured": false, 00:35:17.996 "data_offset": 2048, 00:35:17.996 "data_size": 63488 00:35:17.996 } 
00:35:17.996 ] 00:35:17.996 }' 00:35:17.996 17:32:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:17.996 17:32:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:18.575 17:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:35:18.575 17:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:35:18.575 17:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:35:18.575 17:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:35:18.575 17:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.575 17:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:18.575 [2024-11-26 17:32:19.078650] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:35:18.575 [2024-11-26 17:32:19.078730] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:18.575 [2024-11-26 17:32:19.078753] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:35:18.575 [2024-11-26 17:32:19.078766] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:18.575 [2024-11-26 17:32:19.079251] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:18.575 [2024-11-26 17:32:19.079274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:35:18.575 [2024-11-26 17:32:19.079374] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:35:18.575 [2024-11-26 17:32:19.079404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:35:18.575 [2024-11-26 17:32:19.079557] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:35:18.575 [2024-11-26 17:32:19.079572] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:35:18.575 [2024-11-26 17:32:19.079885] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:35:18.575 [2024-11-26 17:32:19.080071] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:35:18.575 [2024-11-26 17:32:19.080083] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:35:18.575 [2024-11-26 17:32:19.080242] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:18.576 pt3 00:35:18.576 17:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.576 17:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:35:18.576 17:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:18.576 17:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:18.576 17:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:18.576 17:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:18.576 17:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:18.576 17:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:18.576 17:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:18.576 17:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:18.576 17:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:18.576 17:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:18.576 
17:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:18.576 17:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.576 17:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:18.576 17:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.576 17:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:18.576 "name": "raid_bdev1", 00:35:18.576 "uuid": "d1c5143f-20f6-43c9-ae40-43a5e3976529", 00:35:18.576 "strip_size_kb": 0, 00:35:18.576 "state": "online", 00:35:18.576 "raid_level": "raid1", 00:35:18.576 "superblock": true, 00:35:18.576 "num_base_bdevs": 3, 00:35:18.576 "num_base_bdevs_discovered": 2, 00:35:18.576 "num_base_bdevs_operational": 2, 00:35:18.576 "base_bdevs_list": [ 00:35:18.576 { 00:35:18.576 "name": null, 00:35:18.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:18.577 "is_configured": false, 00:35:18.577 "data_offset": 2048, 00:35:18.577 "data_size": 63488 00:35:18.577 }, 00:35:18.577 { 00:35:18.577 "name": "pt2", 00:35:18.577 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:18.577 "is_configured": true, 00:35:18.577 "data_offset": 2048, 00:35:18.577 "data_size": 63488 00:35:18.577 }, 00:35:18.577 { 00:35:18.577 "name": "pt3", 00:35:18.577 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:18.577 "is_configured": true, 00:35:18.577 "data_offset": 2048, 00:35:18.577 "data_size": 63488 00:35:18.577 } 00:35:18.577 ] 00:35:18.577 }' 00:35:18.577 17:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:18.577 17:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:18.844 17:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:35:18.844 17:32:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.844 17:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:19.104 [2024-11-26 17:32:19.541823] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:19.104 [2024-11-26 17:32:19.541928] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:19.104 [2024-11-26 17:32:19.542045] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:19.104 [2024-11-26 17:32:19.542152] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:19.104 [2024-11-26 17:32:19.542240] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:35:19.104 17:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.104 17:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:19.104 17:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:35:19.104 17:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.104 17:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:19.104 17:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.104 17:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:35:19.104 17:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:35:19.104 17:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:35:19.104 17:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:35:19.104 17:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:35:19.104 17:32:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.104 17:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:19.104 17:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.104 17:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:35:19.104 17:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.104 17:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:19.104 [2024-11-26 17:32:19.597761] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:35:19.104 [2024-11-26 17:32:19.597884] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:19.104 [2024-11-26 17:32:19.597926] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:35:19.104 [2024-11-26 17:32:19.597960] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:19.104 [2024-11-26 17:32:19.600377] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:19.104 [2024-11-26 17:32:19.600463] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:35:19.104 [2024-11-26 17:32:19.600616] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:35:19.104 [2024-11-26 17:32:19.600714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:35:19.104 [2024-11-26 17:32:19.600913] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:35:19.104 [2024-11-26 17:32:19.600974] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:19.104 [2024-11-26 17:32:19.601017] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:35:19.104 [2024-11-26 17:32:19.601130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:19.104 pt1 00:35:19.104 17:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.104 17:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:35:19.104 17:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:35:19.104 17:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:19.104 17:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:19.104 17:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:19.104 17:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:19.104 17:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:19.104 17:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:19.104 17:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:19.104 17:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:19.104 17:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:19.104 17:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:19.104 17:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:19.104 17:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.104 17:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:19.104 17:32:19 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.104 17:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:19.104 "name": "raid_bdev1", 00:35:19.104 "uuid": "d1c5143f-20f6-43c9-ae40-43a5e3976529", 00:35:19.104 "strip_size_kb": 0, 00:35:19.104 "state": "configuring", 00:35:19.104 "raid_level": "raid1", 00:35:19.104 "superblock": true, 00:35:19.104 "num_base_bdevs": 3, 00:35:19.104 "num_base_bdevs_discovered": 1, 00:35:19.104 "num_base_bdevs_operational": 2, 00:35:19.104 "base_bdevs_list": [ 00:35:19.104 { 00:35:19.104 "name": null, 00:35:19.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:19.104 "is_configured": false, 00:35:19.104 "data_offset": 2048, 00:35:19.104 "data_size": 63488 00:35:19.104 }, 00:35:19.104 { 00:35:19.104 "name": "pt2", 00:35:19.104 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:19.104 "is_configured": true, 00:35:19.104 "data_offset": 2048, 00:35:19.104 "data_size": 63488 00:35:19.104 }, 00:35:19.104 { 00:35:19.104 "name": null, 00:35:19.104 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:19.104 "is_configured": false, 00:35:19.105 "data_offset": 2048, 00:35:19.105 "data_size": 63488 00:35:19.105 } 00:35:19.105 ] 00:35:19.105 }' 00:35:19.105 17:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:19.105 17:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:19.672 17:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:35:19.672 17:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:35:19.672 17:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.672 17:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:19.672 17:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:35:19.672 17:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:35:19.672 17:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:35:19.672 17:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.672 17:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:19.672 [2024-11-26 17:32:20.140875] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:35:19.672 [2024-11-26 17:32:20.140958] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:19.672 [2024-11-26 17:32:20.140985] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:35:19.672 [2024-11-26 17:32:20.140996] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:19.672 [2024-11-26 17:32:20.141547] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:19.672 [2024-11-26 17:32:20.141570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:35:19.672 [2024-11-26 17:32:20.141670] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:35:19.672 [2024-11-26 17:32:20.141693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:35:19.672 [2024-11-26 17:32:20.141845] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:35:19.672 [2024-11-26 17:32:20.141855] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:35:19.672 [2024-11-26 17:32:20.142135] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:35:19.672 [2024-11-26 17:32:20.142323] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:35:19.672 [2024-11-26 17:32:20.142341] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:35:19.672 [2024-11-26 17:32:20.142499] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:19.672 pt3 00:35:19.672 17:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.672 17:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:35:19.672 17:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:19.672 17:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:19.672 17:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:19.672 17:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:19.672 17:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:19.672 17:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:19.672 17:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:19.672 17:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:19.672 17:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:19.672 17:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:19.672 17:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:19.672 17:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.672 17:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:19.672 17:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:35:19.672 17:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:19.672 "name": "raid_bdev1", 00:35:19.672 "uuid": "d1c5143f-20f6-43c9-ae40-43a5e3976529", 00:35:19.672 "strip_size_kb": 0, 00:35:19.672 "state": "online", 00:35:19.672 "raid_level": "raid1", 00:35:19.672 "superblock": true, 00:35:19.672 "num_base_bdevs": 3, 00:35:19.672 "num_base_bdevs_discovered": 2, 00:35:19.672 "num_base_bdevs_operational": 2, 00:35:19.672 "base_bdevs_list": [ 00:35:19.672 { 00:35:19.672 "name": null, 00:35:19.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:19.672 "is_configured": false, 00:35:19.672 "data_offset": 2048, 00:35:19.672 "data_size": 63488 00:35:19.672 }, 00:35:19.672 { 00:35:19.672 "name": "pt2", 00:35:19.672 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:19.672 "is_configured": true, 00:35:19.672 "data_offset": 2048, 00:35:19.672 "data_size": 63488 00:35:19.672 }, 00:35:19.672 { 00:35:19.672 "name": "pt3", 00:35:19.672 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:19.672 "is_configured": true, 00:35:19.672 "data_offset": 2048, 00:35:19.672 "data_size": 63488 00:35:19.672 } 00:35:19.672 ] 00:35:19.672 }' 00:35:19.672 17:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:19.672 17:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:19.931 17:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:35:19.931 17:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.931 17:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:19.931 17:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:35:19.931 17:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.189 17:32:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:35:20.189 17:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:35:20.189 17:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.189 17:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:20.189 17:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:35:20.189 [2024-11-26 17:32:20.644333] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:20.189 17:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.189 17:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' d1c5143f-20f6-43c9-ae40-43a5e3976529 '!=' d1c5143f-20f6-43c9-ae40-43a5e3976529 ']' 00:35:20.189 17:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68892 00:35:20.189 17:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68892 ']' 00:35:20.189 17:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68892 00:35:20.189 17:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:35:20.190 17:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:20.190 17:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68892 00:35:20.190 17:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:20.190 17:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:20.190 17:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68892' 00:35:20.190 killing process with pid 68892 00:35:20.190 17:32:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 68892 00:35:20.190 [2024-11-26 17:32:20.728850] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:20.190 [2024-11-26 17:32:20.728956] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:20.190 [2024-11-26 17:32:20.729026] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:20.190 17:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68892 00:35:20.190 [2024-11-26 17:32:20.729039] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:35:20.448 [2024-11-26 17:32:21.048750] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:21.821 17:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:35:21.821 00:35:21.821 real 0m8.192s 00:35:21.821 user 0m12.822s 00:35:21.821 sys 0m1.440s 00:35:21.821 17:32:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:21.821 ************************************ 00:35:21.821 END TEST raid_superblock_test 00:35:21.821 ************************************ 00:35:21.821 17:32:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:21.821 17:32:22 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:35:21.821 17:32:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:35:21.821 17:32:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:21.821 17:32:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:21.821 ************************************ 00:35:21.821 START TEST raid_read_error_test 00:35:21.821 ************************************ 00:35:21.821 17:32:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:35:21.821 17:32:22 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:35:21.821 17:32:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:35:21.821 17:32:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:35:21.821 17:32:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:35:21.821 17:32:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:35:21.821 17:32:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:35:21.821 17:32:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:35:21.821 17:32:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:35:21.821 17:32:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:35:21.821 17:32:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:35:21.821 17:32:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:35:21.821 17:32:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:35:21.821 17:32:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:35:21.821 17:32:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:35:21.821 17:32:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:35:21.821 17:32:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:35:21.821 17:32:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:35:21.821 17:32:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:35:21.821 17:32:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:35:21.821 17:32:22 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:35:21.821 17:32:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:35:21.821 17:32:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:35:21.821 17:32:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:35:21.821 17:32:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:35:21.821 17:32:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.gI2STm9M7i 00:35:21.821 17:32:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69343 00:35:21.821 17:32:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:35:21.821 17:32:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69343 00:35:21.821 17:32:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69343 ']' 00:35:21.821 17:32:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:21.821 17:32:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:21.821 17:32:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:21.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:21.821 17:32:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:21.821 17:32:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:21.821 [2024-11-26 17:32:22.472424] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:35:21.821 [2024-11-26 17:32:22.472684] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69343 ] 00:35:22.080 [2024-11-26 17:32:22.654164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:22.338 [2024-11-26 17:32:22.778122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:22.338 [2024-11-26 17:32:22.993696] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:22.338 [2024-11-26 17:32:22.993847] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:22.904 17:32:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:22.904 17:32:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:35:22.904 17:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:35:22.904 17:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:35:22.904 17:32:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.904 17:32:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:22.904 BaseBdev1_malloc 00:35:22.904 17:32:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.904 17:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:35:22.904 17:32:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.904 17:32:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:22.904 true 00:35:22.904 17:32:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:35:22.904 17:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:35:22.904 17:32:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.905 17:32:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:22.905 [2024-11-26 17:32:23.432879] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:35:22.905 [2024-11-26 17:32:23.432956] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:22.905 [2024-11-26 17:32:23.432984] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:35:22.905 [2024-11-26 17:32:23.433000] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:22.905 [2024-11-26 17:32:23.435548] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:22.905 [2024-11-26 17:32:23.435665] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:35:22.905 BaseBdev1 00:35:22.905 17:32:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.905 17:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:35:22.905 17:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:35:22.905 17:32:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.905 17:32:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:22.905 BaseBdev2_malloc 00:35:22.905 17:32:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.905 17:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:35:22.905 17:32:23 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.905 17:32:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:22.905 true 00:35:22.905 17:32:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.905 17:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:35:22.905 17:32:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.905 17:32:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:22.905 [2024-11-26 17:32:23.500295] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:35:22.905 [2024-11-26 17:32:23.500383] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:22.905 [2024-11-26 17:32:23.500409] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:35:22.905 [2024-11-26 17:32:23.500422] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:22.905 [2024-11-26 17:32:23.502986] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:22.905 [2024-11-26 17:32:23.503117] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:35:22.905 BaseBdev2 00:35:22.905 17:32:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.905 17:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:35:22.905 17:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:35:22.905 17:32:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.905 17:32:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:22.905 BaseBdev3_malloc 00:35:22.905 17:32:23 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.905 17:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:35:22.905 17:32:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.905 17:32:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:22.905 true 00:35:22.905 17:32:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.905 17:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:35:22.905 17:32:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.905 17:32:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:23.163 [2024-11-26 17:32:23.601375] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:35:23.163 [2024-11-26 17:32:23.601470] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:23.163 [2024-11-26 17:32:23.601504] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:35:23.163 [2024-11-26 17:32:23.601539] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:23.163 [2024-11-26 17:32:23.604292] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:23.163 [2024-11-26 17:32:23.604350] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:35:23.163 BaseBdev3 00:35:23.163 17:32:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.163 17:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:35:23.163 17:32:23 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.163 17:32:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:23.163 [2024-11-26 17:32:23.617525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:23.163 [2024-11-26 17:32:23.619927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:23.163 [2024-11-26 17:32:23.620026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:23.163 [2024-11-26 17:32:23.620285] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:35:23.163 [2024-11-26 17:32:23.620302] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:35:23.163 [2024-11-26 17:32:23.620653] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:35:23.163 [2024-11-26 17:32:23.620880] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:35:23.163 [2024-11-26 17:32:23.620895] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:35:23.163 [2024-11-26 17:32:23.621111] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:23.163 17:32:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.163 17:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:35:23.163 17:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:23.163 17:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:23.163 17:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:23.163 17:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:23.163 17:32:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:23.164 17:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:23.164 17:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:23.164 17:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:23.164 17:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:23.164 17:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:23.164 17:32:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.164 17:32:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:23.164 17:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:23.164 17:32:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.164 17:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:23.164 "name": "raid_bdev1", 00:35:23.164 "uuid": "53cad08e-69a7-4f7e-b8fe-c361fd36f47d", 00:35:23.164 "strip_size_kb": 0, 00:35:23.164 "state": "online", 00:35:23.164 "raid_level": "raid1", 00:35:23.164 "superblock": true, 00:35:23.164 "num_base_bdevs": 3, 00:35:23.164 "num_base_bdevs_discovered": 3, 00:35:23.164 "num_base_bdevs_operational": 3, 00:35:23.164 "base_bdevs_list": [ 00:35:23.164 { 00:35:23.164 "name": "BaseBdev1", 00:35:23.164 "uuid": "ad03d13c-242c-59f9-ba7d-4c4e696b1aae", 00:35:23.164 "is_configured": true, 00:35:23.164 "data_offset": 2048, 00:35:23.164 "data_size": 63488 00:35:23.164 }, 00:35:23.164 { 00:35:23.164 "name": "BaseBdev2", 00:35:23.164 "uuid": "a1622f55-6ebc-5960-8d69-bdf494bc1343", 00:35:23.164 "is_configured": true, 00:35:23.164 "data_offset": 2048, 00:35:23.164 "data_size": 63488 
00:35:23.164 }, 00:35:23.164 { 00:35:23.164 "name": "BaseBdev3", 00:35:23.164 "uuid": "ded26598-eff0-5536-a45f-835402c678c0", 00:35:23.164 "is_configured": true, 00:35:23.164 "data_offset": 2048, 00:35:23.164 "data_size": 63488 00:35:23.164 } 00:35:23.164 ] 00:35:23.164 }' 00:35:23.164 17:32:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:23.164 17:32:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:23.422 17:32:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:35:23.422 17:32:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:35:23.682 [2024-11-26 17:32:24.241983] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:35:24.646 17:32:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:35:24.646 17:32:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.646 17:32:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:24.646 17:32:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.646 17:32:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:35:24.646 17:32:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:35:24.646 17:32:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:35:24.646 17:32:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:35:24.646 17:32:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:35:24.646 17:32:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:24.646 
17:32:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:24.646 17:32:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:24.646 17:32:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:24.646 17:32:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:24.646 17:32:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:24.646 17:32:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:24.646 17:32:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:24.646 17:32:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:24.646 17:32:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:24.646 17:32:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.646 17:32:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:24.646 17:32:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:24.646 17:32:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.646 17:32:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:24.646 "name": "raid_bdev1", 00:35:24.646 "uuid": "53cad08e-69a7-4f7e-b8fe-c361fd36f47d", 00:35:24.646 "strip_size_kb": 0, 00:35:24.646 "state": "online", 00:35:24.646 "raid_level": "raid1", 00:35:24.646 "superblock": true, 00:35:24.646 "num_base_bdevs": 3, 00:35:24.646 "num_base_bdevs_discovered": 3, 00:35:24.646 "num_base_bdevs_operational": 3, 00:35:24.646 "base_bdevs_list": [ 00:35:24.646 { 00:35:24.646 "name": "BaseBdev1", 00:35:24.646 "uuid": "ad03d13c-242c-59f9-ba7d-4c4e696b1aae", 
00:35:24.646 "is_configured": true, 00:35:24.646 "data_offset": 2048, 00:35:24.646 "data_size": 63488 00:35:24.646 }, 00:35:24.646 { 00:35:24.646 "name": "BaseBdev2", 00:35:24.646 "uuid": "a1622f55-6ebc-5960-8d69-bdf494bc1343", 00:35:24.646 "is_configured": true, 00:35:24.646 "data_offset": 2048, 00:35:24.646 "data_size": 63488 00:35:24.646 }, 00:35:24.646 { 00:35:24.646 "name": "BaseBdev3", 00:35:24.646 "uuid": "ded26598-eff0-5536-a45f-835402c678c0", 00:35:24.646 "is_configured": true, 00:35:24.646 "data_offset": 2048, 00:35:24.646 "data_size": 63488 00:35:24.646 } 00:35:24.646 ] 00:35:24.646 }' 00:35:24.646 17:32:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:24.646 17:32:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:24.905 17:32:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:35:24.905 17:32:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.905 17:32:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:25.165 [2024-11-26 17:32:25.600321] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:25.165 [2024-11-26 17:32:25.600355] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:25.165 [2024-11-26 17:32:25.603185] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:25.165 [2024-11-26 17:32:25.603234] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:25.165 [2024-11-26 17:32:25.603335] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:25.165 [2024-11-26 17:32:25.603345] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:35:25.165 17:32:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:35:25.165 17:32:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69343 00:35:25.165 17:32:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69343 ']' 00:35:25.165 17:32:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69343 00:35:25.165 { 00:35:25.165 "results": [ 00:35:25.165 { 00:35:25.165 "job": "raid_bdev1", 00:35:25.165 "core_mask": "0x1", 00:35:25.165 "workload": "randrw", 00:35:25.165 "percentage": 50, 00:35:25.165 "status": "finished", 00:35:25.165 "queue_depth": 1, 00:35:25.165 "io_size": 131072, 00:35:25.165 "runtime": 1.358874, 00:35:25.165 "iops": 11266.681090373353, 00:35:25.165 "mibps": 1408.3351362966691, 00:35:25.165 "io_failed": 0, 00:35:25.165 "io_timeout": 0, 00:35:25.165 "avg_latency_us": 85.44278106897053, 00:35:25.165 "min_latency_us": 27.165065502183406, 00:35:25.165 "max_latency_us": 1974.665502183406 00:35:25.165 } 00:35:25.165 ], 00:35:25.165 "core_count": 1 00:35:25.165 } 00:35:25.165 17:32:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:35:25.165 17:32:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:25.165 17:32:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69343 00:35:25.165 17:32:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:25.165 17:32:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:25.165 17:32:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69343' 00:35:25.165 killing process with pid 69343 00:35:25.165 17:32:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69343 00:35:25.166 17:32:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69343 00:35:25.166 [2024-11-26 17:32:25.648334] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:25.425 [2024-11-26 17:32:25.898501] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:26.815 17:32:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.gI2STm9M7i 00:35:26.815 17:32:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:35:26.815 17:32:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:35:26.815 17:32:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:35:26.815 17:32:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:35:26.815 17:32:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:35:26.815 17:32:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:35:26.815 17:32:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:35:26.815 ************************************ 00:35:26.815 END TEST raid_read_error_test 00:35:26.815 ************************************ 00:35:26.815 00:35:26.815 real 0m4.801s 00:35:26.815 user 0m5.801s 00:35:26.815 sys 0m0.564s 00:35:26.815 17:32:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:26.815 17:32:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:26.815 17:32:27 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:35:26.815 17:32:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:35:26.815 17:32:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:26.815 17:32:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:26.815 ************************************ 00:35:26.815 START TEST raid_write_error_test 00:35:26.815 ************************************ 00:35:26.815 17:32:27 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:35:26.815 17:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:35:26.815 17:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:35:26.815 17:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:35:26.815 17:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:35:26.815 17:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:35:26.815 17:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:35:26.815 17:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:35:26.815 17:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:35:26.815 17:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:35:26.815 17:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:35:26.815 17:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:35:26.815 17:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:35:26.815 17:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:35:26.815 17:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:35:26.815 17:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:35:26.815 17:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:35:26.815 17:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:35:26.815 17:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:35:26.815 17:32:27 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:35:26.815 17:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:35:26.815 17:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:35:26.815 17:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:35:26.815 17:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:35:26.815 17:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:35:26.815 17:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.o3EdyzbGbE 00:35:26.815 17:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69489 00:35:26.815 17:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:35:26.815 17:32:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69489 00:35:26.815 17:32:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69489 ']' 00:35:26.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:26.815 17:32:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:26.815 17:32:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:26.815 17:32:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:26.815 17:32:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:26.815 17:32:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:26.815 [2024-11-26 17:32:27.355273] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:35:26.815 [2024-11-26 17:32:27.356129] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69489 ] 00:35:27.074 [2024-11-26 17:32:27.538172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:27.074 [2024-11-26 17:32:27.669032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:27.333 [2024-11-26 17:32:27.896986] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:27.333 [2024-11-26 17:32:27.897041] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:27.592 17:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:27.592 17:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:35:27.592 17:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:35:27.592 17:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:35:27.592 17:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.592 17:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:27.592 BaseBdev1_malloc 00:35:27.592 17:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.592 17:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:35:27.592 17:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.592 17:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:27.592 true 00:35:27.592 17:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.592 17:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:35:27.592 17:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.592 17:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:27.853 [2024-11-26 17:32:28.286032] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:35:27.853 [2024-11-26 17:32:28.286100] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:27.853 [2024-11-26 17:32:28.286125] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:35:27.853 [2024-11-26 17:32:28.286137] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:27.853 [2024-11-26 17:32:28.288466] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:27.853 [2024-11-26 17:32:28.288529] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:35:27.853 BaseBdev1 00:35:27.853 17:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.853 17:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:35:27.853 17:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:35:27.853 17:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.853 17:32:28 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:35:27.853 BaseBdev2_malloc 00:35:27.853 17:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.853 17:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:35:27.853 17:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.853 17:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:27.853 true 00:35:27.853 17:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.853 17:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:35:27.853 17:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.853 17:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:27.853 [2024-11-26 17:32:28.359356] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:35:27.853 [2024-11-26 17:32:28.359428] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:27.853 [2024-11-26 17:32:28.359450] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:35:27.853 [2024-11-26 17:32:28.359461] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:27.853 [2024-11-26 17:32:28.362088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:27.853 [2024-11-26 17:32:28.362142] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:35:27.853 BaseBdev2 00:35:27.853 17:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.853 17:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:35:27.853 17:32:28 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:35:27.853 17:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.853 17:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:27.853 BaseBdev3_malloc 00:35:27.853 17:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.853 17:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:35:27.853 17:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.853 17:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:27.853 true 00:35:27.853 17:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.853 17:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:35:27.853 17:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.853 17:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:27.853 [2024-11-26 17:32:28.441078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:35:27.853 [2024-11-26 17:32:28.441189] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:27.853 [2024-11-26 17:32:28.441226] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:35:27.853 [2024-11-26 17:32:28.441257] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:27.853 [2024-11-26 17:32:28.443401] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:27.853 [2024-11-26 17:32:28.443477] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:35:27.853 BaseBdev3 00:35:27.853 17:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.853 17:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:35:27.853 17:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.853 17:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:27.853 [2024-11-26 17:32:28.453136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:27.853 [2024-11-26 17:32:28.455002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:27.853 [2024-11-26 17:32:28.455118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:27.853 [2024-11-26 17:32:28.455346] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:35:27.853 [2024-11-26 17:32:28.455394] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:35:27.853 [2024-11-26 17:32:28.455665] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:35:27.853 [2024-11-26 17:32:28.455886] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:35:27.853 [2024-11-26 17:32:28.455932] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:35:27.853 [2024-11-26 17:32:28.456109] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:27.853 17:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.853 17:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:35:27.853 17:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:35:27.853 17:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:27.853 17:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:27.853 17:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:27.853 17:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:27.853 17:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:27.853 17:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:27.853 17:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:27.853 17:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:27.853 17:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:27.853 17:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:27.853 17:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.853 17:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:27.853 17:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.853 17:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:27.853 "name": "raid_bdev1", 00:35:27.853 "uuid": "33d95674-eb3a-49f3-aa74-1e38abd620ef", 00:35:27.853 "strip_size_kb": 0, 00:35:27.853 "state": "online", 00:35:27.854 "raid_level": "raid1", 00:35:27.854 "superblock": true, 00:35:27.854 "num_base_bdevs": 3, 00:35:27.854 "num_base_bdevs_discovered": 3, 00:35:27.854 "num_base_bdevs_operational": 3, 00:35:27.854 "base_bdevs_list": [ 00:35:27.854 { 00:35:27.854 "name": "BaseBdev1", 00:35:27.854 
"uuid": "85d9d81c-8617-5cae-8d11-4f0b60953f0b", 00:35:27.854 "is_configured": true, 00:35:27.854 "data_offset": 2048, 00:35:27.854 "data_size": 63488 00:35:27.854 }, 00:35:27.854 { 00:35:27.854 "name": "BaseBdev2", 00:35:27.854 "uuid": "679cd6e6-1eb1-5bf8-9721-bbddf42e64a9", 00:35:27.854 "is_configured": true, 00:35:27.854 "data_offset": 2048, 00:35:27.854 "data_size": 63488 00:35:27.854 }, 00:35:27.854 { 00:35:27.854 "name": "BaseBdev3", 00:35:27.854 "uuid": "66341da7-1a48-5a4c-9dc7-026dc1e7a966", 00:35:27.854 "is_configured": true, 00:35:27.854 "data_offset": 2048, 00:35:27.854 "data_size": 63488 00:35:27.854 } 00:35:27.854 ] 00:35:27.854 }' 00:35:27.854 17:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:27.854 17:32:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:28.421 17:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:35:28.421 17:32:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:35:28.421 [2024-11-26 17:32:28.997883] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:35:29.360 17:32:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:35:29.360 17:32:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.360 17:32:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:29.360 [2024-11-26 17:32:29.906641] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:35:29.360 [2024-11-26 17:32:29.906793] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:29.360 [2024-11-26 17:32:29.907077] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:35:29.360 17:32:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.360 17:32:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:35:29.360 17:32:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:35:29.360 17:32:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:35:29.360 17:32:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:35:29.360 17:32:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:35:29.360 17:32:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:29.360 17:32:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:29.360 17:32:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:29.360 17:32:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:29.360 17:32:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:29.360 17:32:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:29.360 17:32:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:29.360 17:32:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:29.360 17:32:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:29.360 17:32:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:29.360 17:32:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.360 17:32:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:35:29.360 17:32:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:29.360 17:32:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.360 17:32:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:29.360 "name": "raid_bdev1", 00:35:29.360 "uuid": "33d95674-eb3a-49f3-aa74-1e38abd620ef", 00:35:29.360 "strip_size_kb": 0, 00:35:29.360 "state": "online", 00:35:29.360 "raid_level": "raid1", 00:35:29.360 "superblock": true, 00:35:29.360 "num_base_bdevs": 3, 00:35:29.360 "num_base_bdevs_discovered": 2, 00:35:29.360 "num_base_bdevs_operational": 2, 00:35:29.360 "base_bdevs_list": [ 00:35:29.360 { 00:35:29.360 "name": null, 00:35:29.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:29.360 "is_configured": false, 00:35:29.360 "data_offset": 0, 00:35:29.360 "data_size": 63488 00:35:29.360 }, 00:35:29.360 { 00:35:29.360 "name": "BaseBdev2", 00:35:29.360 "uuid": "679cd6e6-1eb1-5bf8-9721-bbddf42e64a9", 00:35:29.360 "is_configured": true, 00:35:29.360 "data_offset": 2048, 00:35:29.360 "data_size": 63488 00:35:29.360 }, 00:35:29.360 { 00:35:29.360 "name": "BaseBdev3", 00:35:29.360 "uuid": "66341da7-1a48-5a4c-9dc7-026dc1e7a966", 00:35:29.360 "is_configured": true, 00:35:29.360 "data_offset": 2048, 00:35:29.360 "data_size": 63488 00:35:29.360 } 00:35:29.360 ] 00:35:29.360 }' 00:35:29.360 17:32:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:29.360 17:32:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:29.929 17:32:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:35:29.929 17:32:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.929 17:32:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:29.929 [2024-11-26 17:32:30.357237] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:29.929 [2024-11-26 17:32:30.357375] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:29.929 [2024-11-26 17:32:30.360733] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:29.929 [2024-11-26 17:32:30.360806] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:29.929 [2024-11-26 17:32:30.360899] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:29.929 [2024-11-26 17:32:30.360917] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:35:29.929 { 00:35:29.929 "results": [ 00:35:29.929 { 00:35:29.929 "job": "raid_bdev1", 00:35:29.929 "core_mask": "0x1", 00:35:29.929 "workload": "randrw", 00:35:29.929 "percentage": 50, 00:35:29.929 "status": "finished", 00:35:29.929 "queue_depth": 1, 00:35:29.929 "io_size": 131072, 00:35:29.929 "runtime": 1.359687, 00:35:29.929 "iops": 12772.057098435154, 00:35:29.929 "mibps": 1596.5071373043943, 00:35:29.929 "io_failed": 0, 00:35:29.929 "io_timeout": 0, 00:35:29.929 "avg_latency_us": 75.09808364182987, 00:35:29.929 "min_latency_us": 25.823580786026202, 00:35:29.929 "max_latency_us": 1459.5353711790392 00:35:29.929 } 00:35:29.929 ], 00:35:29.929 "core_count": 1 00:35:29.929 } 00:35:29.929 17:32:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.929 17:32:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69489 00:35:29.929 17:32:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69489 ']' 00:35:29.929 17:32:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69489 00:35:29.929 17:32:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:35:29.929 17:32:30 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:29.929 17:32:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69489 00:35:29.929 17:32:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:29.929 17:32:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:29.929 killing process with pid 69489 00:35:29.929 17:32:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69489' 00:35:29.929 17:32:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69489 00:35:29.929 [2024-11-26 17:32:30.407501] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:29.929 17:32:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69489 00:35:30.188 [2024-11-26 17:32:30.671215] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:31.569 17:32:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:35:31.569 17:32:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.o3EdyzbGbE 00:35:31.569 17:32:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:35:31.569 17:32:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:35:31.569 17:32:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:35:31.569 17:32:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:35:31.569 17:32:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:35:31.569 17:32:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:35:31.569 00:35:31.569 real 0m4.755s 00:35:31.569 user 0m5.612s 00:35:31.569 sys 0m0.597s 00:35:31.569 17:32:31 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:31.569 17:32:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:35:31.569 ************************************ 00:35:31.569 END TEST raid_write_error_test 00:35:31.569 ************************************ 00:35:31.569 17:32:32 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:35:31.569 17:32:32 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:35:31.569 17:32:32 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:35:31.569 17:32:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:35:31.569 17:32:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:31.569 17:32:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:31.569 ************************************ 00:35:31.569 START TEST raid_state_function_test 00:35:31.569 ************************************ 00:35:31.569 17:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:35:31.569 17:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:35:31.569 17:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:35:31.569 17:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:35:31.569 17:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:35:31.569 17:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:35:31.569 17:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:31.569 17:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:35:31.569 17:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:35:31.569 17:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:31.569 17:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:35:31.569 17:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:35:31.569 17:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:31.569 17:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:35:31.569 17:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:35:31.569 17:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:31.569 17:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:35:31.569 17:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:35:31.569 17:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:31.569 17:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:35:31.569 17:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:35:31.569 17:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:35:31.569 17:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:35:31.569 17:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:35:31.569 17:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:35:31.569 17:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:35:31.569 17:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:35:31.569 
17:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:35:31.569 17:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:35:31.569 17:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:35:31.569 17:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69632 00:35:31.569 17:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:35:31.569 17:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69632' 00:35:31.569 Process raid pid: 69632 00:35:31.569 17:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69632 00:35:31.570 17:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69632 ']' 00:35:31.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:31.570 17:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:31.570 17:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:31.570 17:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:31.570 17:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:31.570 17:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:31.570 [2024-11-26 17:32:32.157346] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:35:31.570 [2024-11-26 17:32:32.157875] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:31.830 [2024-11-26 17:32:32.333619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:31.830 [2024-11-26 17:32:32.457429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:32.089 [2024-11-26 17:32:32.682163] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:32.089 [2024-11-26 17:32:32.682222] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:32.349 17:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:32.349 17:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:35:32.349 17:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:35:32.349 17:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.349 17:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:32.349 [2024-11-26 17:32:33.004900] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:32.349 [2024-11-26 17:32:33.004959] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:32.349 [2024-11-26 17:32:33.004971] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:32.349 [2024-11-26 17:32:33.004999] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:32.349 [2024-11-26 17:32:33.005007] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:35:32.349 [2024-11-26 17:32:33.005017] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:35:32.349 [2024-11-26 17:32:33.005025] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:35:32.349 [2024-11-26 17:32:33.005035] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:35:32.349 17:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.349 17:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:35:32.349 17:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:32.349 17:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:32.349 17:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:32.349 17:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:32.349 17:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:32.349 17:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:32.349 17:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:32.349 17:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:32.350 17:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:32.350 17:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:32.350 17:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:32.350 17:32:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.350 17:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:32.350 17:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.609 17:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:32.609 "name": "Existed_Raid", 00:35:32.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:32.609 "strip_size_kb": 64, 00:35:32.609 "state": "configuring", 00:35:32.609 "raid_level": "raid0", 00:35:32.609 "superblock": false, 00:35:32.609 "num_base_bdevs": 4, 00:35:32.609 "num_base_bdevs_discovered": 0, 00:35:32.609 "num_base_bdevs_operational": 4, 00:35:32.609 "base_bdevs_list": [ 00:35:32.609 { 00:35:32.609 "name": "BaseBdev1", 00:35:32.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:32.609 "is_configured": false, 00:35:32.609 "data_offset": 0, 00:35:32.609 "data_size": 0 00:35:32.609 }, 00:35:32.609 { 00:35:32.609 "name": "BaseBdev2", 00:35:32.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:32.609 "is_configured": false, 00:35:32.609 "data_offset": 0, 00:35:32.609 "data_size": 0 00:35:32.609 }, 00:35:32.609 { 00:35:32.609 "name": "BaseBdev3", 00:35:32.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:32.609 "is_configured": false, 00:35:32.609 "data_offset": 0, 00:35:32.609 "data_size": 0 00:35:32.609 }, 00:35:32.609 { 00:35:32.609 "name": "BaseBdev4", 00:35:32.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:32.609 "is_configured": false, 00:35:32.609 "data_offset": 0, 00:35:32.609 "data_size": 0 00:35:32.609 } 00:35:32.609 ] 00:35:32.609 }' 00:35:32.609 17:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:32.609 17:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:32.869 17:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:35:32.869 17:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.869 17:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:32.869 [2024-11-26 17:32:33.456139] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:32.869 [2024-11-26 17:32:33.456275] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:35:32.869 17:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.869 17:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:35:32.869 17:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.869 17:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:32.869 [2024-11-26 17:32:33.464148] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:32.869 [2024-11-26 17:32:33.464263] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:32.869 [2024-11-26 17:32:33.464323] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:32.869 [2024-11-26 17:32:33.464351] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:32.869 [2024-11-26 17:32:33.464398] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:35:32.869 [2024-11-26 17:32:33.464425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:35:32.869 [2024-11-26 17:32:33.464479] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:35:32.869 [2024-11-26 17:32:33.464547] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:35:32.869 17:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.869 17:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:35:32.869 17:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.869 17:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:32.869 [2024-11-26 17:32:33.512548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:32.869 BaseBdev1 00:35:32.869 17:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.869 17:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:35:32.869 17:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:35:32.869 17:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:32.869 17:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:35:32.869 17:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:32.869 17:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:32.869 17:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:32.869 17:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.869 17:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:32.869 17:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.869 17:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:35:32.869 17:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.869 17:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:32.869 [ 00:35:32.869 { 00:35:32.869 "name": "BaseBdev1", 00:35:32.869 "aliases": [ 00:35:32.869 "bceb9222-0130-445b-a73d-52b8c81cf60c" 00:35:32.869 ], 00:35:32.870 "product_name": "Malloc disk", 00:35:32.870 "block_size": 512, 00:35:32.870 "num_blocks": 65536, 00:35:32.870 "uuid": "bceb9222-0130-445b-a73d-52b8c81cf60c", 00:35:32.870 "assigned_rate_limits": { 00:35:32.870 "rw_ios_per_sec": 0, 00:35:32.870 "rw_mbytes_per_sec": 0, 00:35:32.870 "r_mbytes_per_sec": 0, 00:35:32.870 "w_mbytes_per_sec": 0 00:35:32.870 }, 00:35:32.870 "claimed": true, 00:35:32.870 "claim_type": "exclusive_write", 00:35:32.870 "zoned": false, 00:35:32.870 "supported_io_types": { 00:35:32.870 "read": true, 00:35:32.870 "write": true, 00:35:32.870 "unmap": true, 00:35:32.870 "flush": true, 00:35:32.870 "reset": true, 00:35:32.870 "nvme_admin": false, 00:35:32.870 "nvme_io": false, 00:35:32.870 "nvme_io_md": false, 00:35:32.870 "write_zeroes": true, 00:35:32.870 "zcopy": true, 00:35:32.870 "get_zone_info": false, 00:35:32.870 "zone_management": false, 00:35:32.870 "zone_append": false, 00:35:32.870 "compare": false, 00:35:32.870 "compare_and_write": false, 00:35:32.870 "abort": true, 00:35:32.870 "seek_hole": false, 00:35:32.870 "seek_data": false, 00:35:32.870 "copy": true, 00:35:32.870 "nvme_iov_md": false 00:35:32.870 }, 00:35:32.870 "memory_domains": [ 00:35:32.870 { 00:35:32.870 "dma_device_id": "system", 00:35:32.870 "dma_device_type": 1 00:35:32.870 }, 00:35:32.870 { 00:35:32.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:32.870 "dma_device_type": 2 00:35:32.870 } 00:35:32.870 ], 00:35:32.870 "driver_specific": {} 00:35:32.870 } 00:35:32.870 ] 00:35:32.870 17:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:35:32.870 17:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:35:32.870 17:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:35:32.870 17:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:32.870 17:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:32.870 17:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:32.870 17:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:32.870 17:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:32.870 17:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:32.870 17:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:32.870 17:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:32.870 17:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:32.870 17:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:32.870 17:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.870 17:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:32.870 17:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:33.129 17:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.129 17:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:33.129 "name": "Existed_Raid", 
00:35:33.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:33.129 "strip_size_kb": 64, 00:35:33.129 "state": "configuring", 00:35:33.129 "raid_level": "raid0", 00:35:33.129 "superblock": false, 00:35:33.129 "num_base_bdevs": 4, 00:35:33.129 "num_base_bdevs_discovered": 1, 00:35:33.129 "num_base_bdevs_operational": 4, 00:35:33.129 "base_bdevs_list": [ 00:35:33.129 { 00:35:33.129 "name": "BaseBdev1", 00:35:33.129 "uuid": "bceb9222-0130-445b-a73d-52b8c81cf60c", 00:35:33.129 "is_configured": true, 00:35:33.129 "data_offset": 0, 00:35:33.129 "data_size": 65536 00:35:33.129 }, 00:35:33.129 { 00:35:33.129 "name": "BaseBdev2", 00:35:33.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:33.129 "is_configured": false, 00:35:33.129 "data_offset": 0, 00:35:33.129 "data_size": 0 00:35:33.129 }, 00:35:33.129 { 00:35:33.129 "name": "BaseBdev3", 00:35:33.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:33.129 "is_configured": false, 00:35:33.129 "data_offset": 0, 00:35:33.129 "data_size": 0 00:35:33.129 }, 00:35:33.129 { 00:35:33.129 "name": "BaseBdev4", 00:35:33.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:33.129 "is_configured": false, 00:35:33.129 "data_offset": 0, 00:35:33.129 "data_size": 0 00:35:33.129 } 00:35:33.129 ] 00:35:33.129 }' 00:35:33.129 17:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:33.129 17:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:33.415 17:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:35:33.415 17:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.415 17:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:33.415 [2024-11-26 17:32:34.023748] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:33.415 [2024-11-26 17:32:34.023810] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:35:33.415 17:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.415 17:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:35:33.415 17:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.415 17:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:33.415 [2024-11-26 17:32:34.035838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:33.415 [2024-11-26 17:32:34.038062] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:33.415 [2024-11-26 17:32:34.038153] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:33.415 [2024-11-26 17:32:34.038190] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:35:33.415 [2024-11-26 17:32:34.038219] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:35:33.415 [2024-11-26 17:32:34.038255] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:35:33.415 [2024-11-26 17:32:34.038281] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:35:33.415 17:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.415 17:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:35:33.415 17:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:35:33.415 17:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:35:33.415 17:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:33.415 17:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:33.415 17:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:33.415 17:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:33.415 17:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:33.415 17:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:33.415 17:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:33.415 17:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:33.415 17:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:33.415 17:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:33.415 17:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:33.415 17:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.415 17:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:33.415 17:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.415 17:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:33.415 "name": "Existed_Raid", 00:35:33.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:33.415 "strip_size_kb": 64, 00:35:33.415 "state": "configuring", 00:35:33.415 "raid_level": "raid0", 00:35:33.415 "superblock": false, 00:35:33.415 "num_base_bdevs": 4, 00:35:33.415 
"num_base_bdevs_discovered": 1, 00:35:33.415 "num_base_bdevs_operational": 4, 00:35:33.415 "base_bdevs_list": [ 00:35:33.415 { 00:35:33.415 "name": "BaseBdev1", 00:35:33.415 "uuid": "bceb9222-0130-445b-a73d-52b8c81cf60c", 00:35:33.415 "is_configured": true, 00:35:33.415 "data_offset": 0, 00:35:33.415 "data_size": 65536 00:35:33.415 }, 00:35:33.415 { 00:35:33.415 "name": "BaseBdev2", 00:35:33.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:33.415 "is_configured": false, 00:35:33.415 "data_offset": 0, 00:35:33.415 "data_size": 0 00:35:33.415 }, 00:35:33.415 { 00:35:33.415 "name": "BaseBdev3", 00:35:33.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:33.415 "is_configured": false, 00:35:33.415 "data_offset": 0, 00:35:33.415 "data_size": 0 00:35:33.415 }, 00:35:33.415 { 00:35:33.415 "name": "BaseBdev4", 00:35:33.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:33.415 "is_configured": false, 00:35:33.415 "data_offset": 0, 00:35:33.415 "data_size": 0 00:35:33.415 } 00:35:33.415 ] 00:35:33.415 }' 00:35:33.415 17:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:33.415 17:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:33.984 17:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:35:33.984 17:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.984 17:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:33.984 [2024-11-26 17:32:34.520342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:33.984 BaseBdev2 00:35:33.984 17:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.984 17:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:35:33.984 17:32:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:35:33.984 17:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:33.984 17:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:35:33.984 17:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:33.984 17:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:33.984 17:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:33.984 17:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.984 17:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:33.984 17:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.984 17:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:35:33.984 17:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.984 17:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:33.984 [ 00:35:33.984 { 00:35:33.984 "name": "BaseBdev2", 00:35:33.984 "aliases": [ 00:35:33.984 "88287bd8-32b1-4230-8101-1921f689ed8e" 00:35:33.984 ], 00:35:33.984 "product_name": "Malloc disk", 00:35:33.984 "block_size": 512, 00:35:33.984 "num_blocks": 65536, 00:35:33.984 "uuid": "88287bd8-32b1-4230-8101-1921f689ed8e", 00:35:33.984 "assigned_rate_limits": { 00:35:33.984 "rw_ios_per_sec": 0, 00:35:33.984 "rw_mbytes_per_sec": 0, 00:35:33.984 "r_mbytes_per_sec": 0, 00:35:33.984 "w_mbytes_per_sec": 0 00:35:33.984 }, 00:35:33.984 "claimed": true, 00:35:33.984 "claim_type": "exclusive_write", 00:35:33.984 "zoned": false, 00:35:33.984 "supported_io_types": { 
00:35:33.984 "read": true, 00:35:33.984 "write": true, 00:35:33.984 "unmap": true, 00:35:33.984 "flush": true, 00:35:33.984 "reset": true, 00:35:33.984 "nvme_admin": false, 00:35:33.984 "nvme_io": false, 00:35:33.984 "nvme_io_md": false, 00:35:33.984 "write_zeroes": true, 00:35:33.984 "zcopy": true, 00:35:33.984 "get_zone_info": false, 00:35:33.984 "zone_management": false, 00:35:33.984 "zone_append": false, 00:35:33.984 "compare": false, 00:35:33.984 "compare_and_write": false, 00:35:33.984 "abort": true, 00:35:33.984 "seek_hole": false, 00:35:33.984 "seek_data": false, 00:35:33.984 "copy": true, 00:35:33.984 "nvme_iov_md": false 00:35:33.984 }, 00:35:33.984 "memory_domains": [ 00:35:33.984 { 00:35:33.984 "dma_device_id": "system", 00:35:33.984 "dma_device_type": 1 00:35:33.984 }, 00:35:33.984 { 00:35:33.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:33.984 "dma_device_type": 2 00:35:33.984 } 00:35:33.984 ], 00:35:33.984 "driver_specific": {} 00:35:33.984 } 00:35:33.984 ] 00:35:33.984 17:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.984 17:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:35:33.984 17:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:35:33.985 17:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:35:33.985 17:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:35:33.985 17:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:33.985 17:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:33.985 17:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:33.985 17:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:35:33.985 17:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:33.985 17:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:33.985 17:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:33.985 17:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:33.985 17:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:33.985 17:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:33.985 17:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.985 17:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:33.985 17:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:33.985 17:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.985 17:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:33.985 "name": "Existed_Raid", 00:35:33.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:33.985 "strip_size_kb": 64, 00:35:33.985 "state": "configuring", 00:35:33.985 "raid_level": "raid0", 00:35:33.985 "superblock": false, 00:35:33.985 "num_base_bdevs": 4, 00:35:33.985 "num_base_bdevs_discovered": 2, 00:35:33.985 "num_base_bdevs_operational": 4, 00:35:33.985 "base_bdevs_list": [ 00:35:33.985 { 00:35:33.985 "name": "BaseBdev1", 00:35:33.985 "uuid": "bceb9222-0130-445b-a73d-52b8c81cf60c", 00:35:33.985 "is_configured": true, 00:35:33.985 "data_offset": 0, 00:35:33.985 "data_size": 65536 00:35:33.985 }, 00:35:33.985 { 00:35:33.985 "name": "BaseBdev2", 00:35:33.985 "uuid": "88287bd8-32b1-4230-8101-1921f689ed8e", 00:35:33.985 
"is_configured": true, 00:35:33.985 "data_offset": 0, 00:35:33.985 "data_size": 65536 00:35:33.985 }, 00:35:33.985 { 00:35:33.985 "name": "BaseBdev3", 00:35:33.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:33.985 "is_configured": false, 00:35:33.985 "data_offset": 0, 00:35:33.985 "data_size": 0 00:35:33.985 }, 00:35:33.985 { 00:35:33.985 "name": "BaseBdev4", 00:35:33.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:33.985 "is_configured": false, 00:35:33.985 "data_offset": 0, 00:35:33.985 "data_size": 0 00:35:33.985 } 00:35:33.985 ] 00:35:33.985 }' 00:35:33.985 17:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:33.985 17:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:34.553 17:32:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:35:34.553 17:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.553 17:32:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:34.553 [2024-11-26 17:32:35.053338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:34.553 BaseBdev3 00:35:34.553 17:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.553 17:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:35:34.553 17:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:35:34.553 17:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:34.553 17:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:35:34.553 17:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:34.553 17:32:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:34.553 17:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:34.553 17:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.553 17:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:34.553 17:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.553 17:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:35:34.553 17:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.553 17:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:34.553 [ 00:35:34.553 { 00:35:34.553 "name": "BaseBdev3", 00:35:34.553 "aliases": [ 00:35:34.553 "aa6621ba-e3f4-4dc2-bf9b-a5acf863f238" 00:35:34.553 ], 00:35:34.553 "product_name": "Malloc disk", 00:35:34.553 "block_size": 512, 00:35:34.553 "num_blocks": 65536, 00:35:34.553 "uuid": "aa6621ba-e3f4-4dc2-bf9b-a5acf863f238", 00:35:34.553 "assigned_rate_limits": { 00:35:34.553 "rw_ios_per_sec": 0, 00:35:34.553 "rw_mbytes_per_sec": 0, 00:35:34.553 "r_mbytes_per_sec": 0, 00:35:34.553 "w_mbytes_per_sec": 0 00:35:34.553 }, 00:35:34.553 "claimed": true, 00:35:34.553 "claim_type": "exclusive_write", 00:35:34.553 "zoned": false, 00:35:34.553 "supported_io_types": { 00:35:34.553 "read": true, 00:35:34.553 "write": true, 00:35:34.553 "unmap": true, 00:35:34.553 "flush": true, 00:35:34.553 "reset": true, 00:35:34.553 "nvme_admin": false, 00:35:34.553 "nvme_io": false, 00:35:34.553 "nvme_io_md": false, 00:35:34.553 "write_zeroes": true, 00:35:34.553 "zcopy": true, 00:35:34.553 "get_zone_info": false, 00:35:34.553 "zone_management": false, 00:35:34.553 "zone_append": false, 00:35:34.553 "compare": false, 00:35:34.553 "compare_and_write": false, 
00:35:34.553 "abort": true, 00:35:34.553 "seek_hole": false, 00:35:34.553 "seek_data": false, 00:35:34.553 "copy": true, 00:35:34.553 "nvme_iov_md": false 00:35:34.553 }, 00:35:34.553 "memory_domains": [ 00:35:34.553 { 00:35:34.553 "dma_device_id": "system", 00:35:34.553 "dma_device_type": 1 00:35:34.553 }, 00:35:34.553 { 00:35:34.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:34.553 "dma_device_type": 2 00:35:34.553 } 00:35:34.553 ], 00:35:34.553 "driver_specific": {} 00:35:34.553 } 00:35:34.553 ] 00:35:34.553 17:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.553 17:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:35:34.553 17:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:35:34.553 17:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:35:34.553 17:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:35:34.553 17:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:34.553 17:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:34.553 17:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:34.553 17:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:34.553 17:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:34.553 17:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:34.553 17:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:34.553 17:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:35:34.553 17:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:34.553 17:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:34.553 17:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.553 17:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:34.553 17:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:34.553 17:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.553 17:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:34.553 "name": "Existed_Raid", 00:35:34.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:34.553 "strip_size_kb": 64, 00:35:34.553 "state": "configuring", 00:35:34.553 "raid_level": "raid0", 00:35:34.553 "superblock": false, 00:35:34.553 "num_base_bdevs": 4, 00:35:34.553 "num_base_bdevs_discovered": 3, 00:35:34.553 "num_base_bdevs_operational": 4, 00:35:34.553 "base_bdevs_list": [ 00:35:34.553 { 00:35:34.553 "name": "BaseBdev1", 00:35:34.553 "uuid": "bceb9222-0130-445b-a73d-52b8c81cf60c", 00:35:34.553 "is_configured": true, 00:35:34.553 "data_offset": 0, 00:35:34.553 "data_size": 65536 00:35:34.553 }, 00:35:34.553 { 00:35:34.553 "name": "BaseBdev2", 00:35:34.553 "uuid": "88287bd8-32b1-4230-8101-1921f689ed8e", 00:35:34.553 "is_configured": true, 00:35:34.553 "data_offset": 0, 00:35:34.553 "data_size": 65536 00:35:34.553 }, 00:35:34.553 { 00:35:34.553 "name": "BaseBdev3", 00:35:34.553 "uuid": "aa6621ba-e3f4-4dc2-bf9b-a5acf863f238", 00:35:34.553 "is_configured": true, 00:35:34.553 "data_offset": 0, 00:35:34.553 "data_size": 65536 00:35:34.553 }, 00:35:34.553 { 00:35:34.553 "name": "BaseBdev4", 00:35:34.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:34.553 "is_configured": false, 
00:35:34.553 "data_offset": 0, 00:35:34.553 "data_size": 0 00:35:34.553 } 00:35:34.553 ] 00:35:34.553 }' 00:35:34.553 17:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:34.553 17:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:35.121 17:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:35:35.121 17:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.121 17:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:35.121 [2024-11-26 17:32:35.574246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:35:35.121 [2024-11-26 17:32:35.574372] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:35:35.121 [2024-11-26 17:32:35.574387] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:35:35.121 [2024-11-26 17:32:35.574728] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:35:35.121 [2024-11-26 17:32:35.574914] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:35:35.121 [2024-11-26 17:32:35.574928] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:35:35.121 [2024-11-26 17:32:35.575204] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:35.121 BaseBdev4 00:35:35.121 17:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.121 17:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:35:35.121 17:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:35:35.121 17:32:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:35.121 17:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:35:35.121 17:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:35.121 17:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:35.121 17:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:35.121 17:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.121 17:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:35.121 17:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.121 17:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:35:35.121 17:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.121 17:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:35.121 [ 00:35:35.121 { 00:35:35.121 "name": "BaseBdev4", 00:35:35.121 "aliases": [ 00:35:35.121 "46c4cfed-d815-4849-9925-ba4a46262775" 00:35:35.121 ], 00:35:35.121 "product_name": "Malloc disk", 00:35:35.121 "block_size": 512, 00:35:35.121 "num_blocks": 65536, 00:35:35.121 "uuid": "46c4cfed-d815-4849-9925-ba4a46262775", 00:35:35.121 "assigned_rate_limits": { 00:35:35.121 "rw_ios_per_sec": 0, 00:35:35.121 "rw_mbytes_per_sec": 0, 00:35:35.121 "r_mbytes_per_sec": 0, 00:35:35.121 "w_mbytes_per_sec": 0 00:35:35.121 }, 00:35:35.121 "claimed": true, 00:35:35.121 "claim_type": "exclusive_write", 00:35:35.121 "zoned": false, 00:35:35.121 "supported_io_types": { 00:35:35.121 "read": true, 00:35:35.121 "write": true, 00:35:35.121 "unmap": true, 00:35:35.121 "flush": true, 00:35:35.121 "reset": true, 00:35:35.121 
"nvme_admin": false, 00:35:35.121 "nvme_io": false, 00:35:35.121 "nvme_io_md": false, 00:35:35.121 "write_zeroes": true, 00:35:35.121 "zcopy": true, 00:35:35.121 "get_zone_info": false, 00:35:35.121 "zone_management": false, 00:35:35.121 "zone_append": false, 00:35:35.121 "compare": false, 00:35:35.121 "compare_and_write": false, 00:35:35.121 "abort": true, 00:35:35.121 "seek_hole": false, 00:35:35.121 "seek_data": false, 00:35:35.121 "copy": true, 00:35:35.121 "nvme_iov_md": false 00:35:35.121 }, 00:35:35.121 "memory_domains": [ 00:35:35.121 { 00:35:35.121 "dma_device_id": "system", 00:35:35.121 "dma_device_type": 1 00:35:35.121 }, 00:35:35.121 { 00:35:35.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:35.121 "dma_device_type": 2 00:35:35.121 } 00:35:35.121 ], 00:35:35.121 "driver_specific": {} 00:35:35.121 } 00:35:35.121 ] 00:35:35.121 17:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.121 17:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:35:35.121 17:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:35:35.121 17:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:35:35.121 17:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:35:35.121 17:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:35.121 17:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:35.121 17:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:35.121 17:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:35.121 17:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:35.121 17:32:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:35.121 17:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:35.121 17:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:35.121 17:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:35.121 17:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:35.121 17:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.121 17:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:35.121 17:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:35.121 17:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.121 17:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:35.121 "name": "Existed_Raid", 00:35:35.121 "uuid": "d412a738-7451-4a7d-bccf-51d393ee17e4", 00:35:35.121 "strip_size_kb": 64, 00:35:35.121 "state": "online", 00:35:35.121 "raid_level": "raid0", 00:35:35.121 "superblock": false, 00:35:35.121 "num_base_bdevs": 4, 00:35:35.121 "num_base_bdevs_discovered": 4, 00:35:35.122 "num_base_bdevs_operational": 4, 00:35:35.122 "base_bdevs_list": [ 00:35:35.122 { 00:35:35.122 "name": "BaseBdev1", 00:35:35.122 "uuid": "bceb9222-0130-445b-a73d-52b8c81cf60c", 00:35:35.122 "is_configured": true, 00:35:35.122 "data_offset": 0, 00:35:35.122 "data_size": 65536 00:35:35.122 }, 00:35:35.122 { 00:35:35.122 "name": "BaseBdev2", 00:35:35.122 "uuid": "88287bd8-32b1-4230-8101-1921f689ed8e", 00:35:35.122 "is_configured": true, 00:35:35.122 "data_offset": 0, 00:35:35.122 "data_size": 65536 00:35:35.122 }, 00:35:35.122 { 00:35:35.122 "name": "BaseBdev3", 00:35:35.122 "uuid": 
"aa6621ba-e3f4-4dc2-bf9b-a5acf863f238", 00:35:35.122 "is_configured": true, 00:35:35.122 "data_offset": 0, 00:35:35.122 "data_size": 65536 00:35:35.122 }, 00:35:35.122 { 00:35:35.122 "name": "BaseBdev4", 00:35:35.122 "uuid": "46c4cfed-d815-4849-9925-ba4a46262775", 00:35:35.122 "is_configured": true, 00:35:35.122 "data_offset": 0, 00:35:35.122 "data_size": 65536 00:35:35.122 } 00:35:35.122 ] 00:35:35.122 }' 00:35:35.122 17:32:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:35.122 17:32:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:35.380 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:35:35.380 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:35:35.380 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:35:35.380 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:35:35.380 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:35:35.380 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:35:35.380 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:35:35.380 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:35:35.380 17:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.380 17:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:35.380 [2024-11-26 17:32:36.061870] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:35.639 17:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.639 17:32:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:35.639 "name": "Existed_Raid", 00:35:35.639 "aliases": [ 00:35:35.639 "d412a738-7451-4a7d-bccf-51d393ee17e4" 00:35:35.639 ], 00:35:35.639 "product_name": "Raid Volume", 00:35:35.639 "block_size": 512, 00:35:35.639 "num_blocks": 262144, 00:35:35.639 "uuid": "d412a738-7451-4a7d-bccf-51d393ee17e4", 00:35:35.639 "assigned_rate_limits": { 00:35:35.639 "rw_ios_per_sec": 0, 00:35:35.639 "rw_mbytes_per_sec": 0, 00:35:35.639 "r_mbytes_per_sec": 0, 00:35:35.639 "w_mbytes_per_sec": 0 00:35:35.639 }, 00:35:35.639 "claimed": false, 00:35:35.639 "zoned": false, 00:35:35.640 "supported_io_types": { 00:35:35.640 "read": true, 00:35:35.640 "write": true, 00:35:35.640 "unmap": true, 00:35:35.640 "flush": true, 00:35:35.640 "reset": true, 00:35:35.640 "nvme_admin": false, 00:35:35.640 "nvme_io": false, 00:35:35.640 "nvme_io_md": false, 00:35:35.640 "write_zeroes": true, 00:35:35.640 "zcopy": false, 00:35:35.640 "get_zone_info": false, 00:35:35.640 "zone_management": false, 00:35:35.640 "zone_append": false, 00:35:35.640 "compare": false, 00:35:35.640 "compare_and_write": false, 00:35:35.640 "abort": false, 00:35:35.640 "seek_hole": false, 00:35:35.640 "seek_data": false, 00:35:35.640 "copy": false, 00:35:35.640 "nvme_iov_md": false 00:35:35.640 }, 00:35:35.640 "memory_domains": [ 00:35:35.640 { 00:35:35.640 "dma_device_id": "system", 00:35:35.640 "dma_device_type": 1 00:35:35.640 }, 00:35:35.640 { 00:35:35.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:35.640 "dma_device_type": 2 00:35:35.640 }, 00:35:35.640 { 00:35:35.640 "dma_device_id": "system", 00:35:35.640 "dma_device_type": 1 00:35:35.640 }, 00:35:35.640 { 00:35:35.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:35.640 "dma_device_type": 2 00:35:35.640 }, 00:35:35.640 { 00:35:35.640 "dma_device_id": "system", 00:35:35.640 "dma_device_type": 1 00:35:35.640 }, 00:35:35.640 { 00:35:35.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:35:35.640 "dma_device_type": 2 00:35:35.640 }, 00:35:35.640 { 00:35:35.640 "dma_device_id": "system", 00:35:35.640 "dma_device_type": 1 00:35:35.640 }, 00:35:35.640 { 00:35:35.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:35.640 "dma_device_type": 2 00:35:35.640 } 00:35:35.640 ], 00:35:35.640 "driver_specific": { 00:35:35.640 "raid": { 00:35:35.640 "uuid": "d412a738-7451-4a7d-bccf-51d393ee17e4", 00:35:35.640 "strip_size_kb": 64, 00:35:35.640 "state": "online", 00:35:35.640 "raid_level": "raid0", 00:35:35.640 "superblock": false, 00:35:35.640 "num_base_bdevs": 4, 00:35:35.640 "num_base_bdevs_discovered": 4, 00:35:35.640 "num_base_bdevs_operational": 4, 00:35:35.640 "base_bdevs_list": [ 00:35:35.640 { 00:35:35.640 "name": "BaseBdev1", 00:35:35.640 "uuid": "bceb9222-0130-445b-a73d-52b8c81cf60c", 00:35:35.640 "is_configured": true, 00:35:35.640 "data_offset": 0, 00:35:35.640 "data_size": 65536 00:35:35.640 }, 00:35:35.640 { 00:35:35.640 "name": "BaseBdev2", 00:35:35.640 "uuid": "88287bd8-32b1-4230-8101-1921f689ed8e", 00:35:35.640 "is_configured": true, 00:35:35.640 "data_offset": 0, 00:35:35.640 "data_size": 65536 00:35:35.640 }, 00:35:35.640 { 00:35:35.640 "name": "BaseBdev3", 00:35:35.640 "uuid": "aa6621ba-e3f4-4dc2-bf9b-a5acf863f238", 00:35:35.640 "is_configured": true, 00:35:35.640 "data_offset": 0, 00:35:35.640 "data_size": 65536 00:35:35.640 }, 00:35:35.640 { 00:35:35.640 "name": "BaseBdev4", 00:35:35.640 "uuid": "46c4cfed-d815-4849-9925-ba4a46262775", 00:35:35.640 "is_configured": true, 00:35:35.640 "data_offset": 0, 00:35:35.640 "data_size": 65536 00:35:35.640 } 00:35:35.640 ] 00:35:35.640 } 00:35:35.640 } 00:35:35.640 }' 00:35:35.640 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:35.640 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:35:35.640 BaseBdev2 00:35:35.640 BaseBdev3 
00:35:35.640 BaseBdev4' 00:35:35.640 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:35.640 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:35:35.640 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:35.640 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:35:35.640 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:35.640 17:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.640 17:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:35.640 17:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.640 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:35.640 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:35.640 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:35.640 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:35:35.640 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:35.640 17:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.640 17:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:35.640 17:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.640 17:32:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:35.640 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:35.640 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:35.640 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:35:35.640 17:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.640 17:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:35.640 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:35.640 17:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.640 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:35.640 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:35.640 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:35.640 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:35.640 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:35:35.640 17:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.640 17:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:35.899 17:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.899 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:35.899 17:32:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:35.899 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:35:35.899 17:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.899 17:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:35.899 [2024-11-26 17:32:36.381059] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:35.899 [2024-11-26 17:32:36.381149] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:35.899 [2024-11-26 17:32:36.381238] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:35.899 17:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.899 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:35:35.899 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:35:35.899 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:35:35.899 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:35:35.899 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:35:35.899 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:35:35.899 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:35.899 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:35:35.899 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:35.899 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:35:35.899 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:35.899 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:35.899 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:35.899 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:35.899 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:35.899 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:35.899 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:35.899 17:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.899 17:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:35.899 17:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.899 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:35.899 "name": "Existed_Raid", 00:35:35.899 "uuid": "d412a738-7451-4a7d-bccf-51d393ee17e4", 00:35:35.899 "strip_size_kb": 64, 00:35:35.899 "state": "offline", 00:35:35.899 "raid_level": "raid0", 00:35:35.899 "superblock": false, 00:35:35.899 "num_base_bdevs": 4, 00:35:35.899 "num_base_bdevs_discovered": 3, 00:35:35.899 "num_base_bdevs_operational": 3, 00:35:35.899 "base_bdevs_list": [ 00:35:35.899 { 00:35:35.899 "name": null, 00:35:35.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:35.900 "is_configured": false, 00:35:35.900 "data_offset": 0, 00:35:35.900 "data_size": 65536 00:35:35.900 }, 00:35:35.900 { 00:35:35.900 "name": "BaseBdev2", 00:35:35.900 "uuid": "88287bd8-32b1-4230-8101-1921f689ed8e", 00:35:35.900 "is_configured": 
true, 00:35:35.900 "data_offset": 0, 00:35:35.900 "data_size": 65536 00:35:35.900 }, 00:35:35.900 { 00:35:35.900 "name": "BaseBdev3", 00:35:35.900 "uuid": "aa6621ba-e3f4-4dc2-bf9b-a5acf863f238", 00:35:35.900 "is_configured": true, 00:35:35.900 "data_offset": 0, 00:35:35.900 "data_size": 65536 00:35:35.900 }, 00:35:35.900 { 00:35:35.900 "name": "BaseBdev4", 00:35:35.900 "uuid": "46c4cfed-d815-4849-9925-ba4a46262775", 00:35:35.900 "is_configured": true, 00:35:35.900 "data_offset": 0, 00:35:35.900 "data_size": 65536 00:35:35.900 } 00:35:35.900 ] 00:35:35.900 }' 00:35:35.900 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:35.900 17:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:36.468 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:35:36.468 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:35:36.468 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:35:36.468 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:36.468 17:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.468 17:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:36.468 17:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.468 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:35:36.468 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:36.468 17:32:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:35:36.468 17:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:35:36.468 17:32:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:36.468 [2024-11-26 17:32:36.988775] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:35:36.468 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.468 17:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:35:36.468 17:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:35:36.468 17:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:36.468 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.468 17:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:35:36.468 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:36.468 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.468 17:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:35:36.468 17:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:36.468 17:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:35:36.468 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.468 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:36.468 [2024-11-26 17:32:37.148518] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:35:36.727 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.727 17:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:35:36.727 17:32:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:35:36.727 17:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:36.727 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.727 17:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:35:36.727 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:36.727 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.727 17:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:35:36.727 17:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:36.727 17:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:35:36.727 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.727 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:36.727 [2024-11-26 17:32:37.313287] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:35:36.727 [2024-11-26 17:32:37.313414] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:36.986 BaseBdev2 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:36.986 [ 00:35:36.986 { 00:35:36.986 "name": "BaseBdev2", 00:35:36.986 "aliases": [ 00:35:36.986 "5c8a0883-d437-47ba-aa49-7ba039129e70" 00:35:36.986 ], 00:35:36.986 "product_name": "Malloc disk", 00:35:36.986 "block_size": 512, 00:35:36.986 "num_blocks": 65536, 00:35:36.986 "uuid": "5c8a0883-d437-47ba-aa49-7ba039129e70", 00:35:36.986 "assigned_rate_limits": { 00:35:36.986 "rw_ios_per_sec": 0, 00:35:36.986 "rw_mbytes_per_sec": 0, 00:35:36.986 "r_mbytes_per_sec": 0, 00:35:36.986 "w_mbytes_per_sec": 0 00:35:36.986 }, 00:35:36.986 "claimed": false, 00:35:36.986 "zoned": false, 00:35:36.986 "supported_io_types": { 00:35:36.986 "read": true, 00:35:36.986 "write": true, 00:35:36.986 "unmap": true, 00:35:36.986 "flush": true, 00:35:36.986 "reset": true, 00:35:36.986 "nvme_admin": false, 00:35:36.986 "nvme_io": false, 00:35:36.986 "nvme_io_md": false, 00:35:36.986 "write_zeroes": true, 00:35:36.986 "zcopy": true, 00:35:36.986 "get_zone_info": false, 00:35:36.986 "zone_management": false, 00:35:36.986 "zone_append": false, 00:35:36.986 "compare": false, 00:35:36.986 "compare_and_write": false, 00:35:36.986 "abort": true, 00:35:36.986 "seek_hole": false, 00:35:36.986 
"seek_data": false, 00:35:36.986 "copy": true, 00:35:36.986 "nvme_iov_md": false 00:35:36.986 }, 00:35:36.986 "memory_domains": [ 00:35:36.986 { 00:35:36.986 "dma_device_id": "system", 00:35:36.986 "dma_device_type": 1 00:35:36.986 }, 00:35:36.986 { 00:35:36.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:36.986 "dma_device_type": 2 00:35:36.986 } 00:35:36.986 ], 00:35:36.986 "driver_specific": {} 00:35:36.986 } 00:35:36.986 ] 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:36.986 BaseBdev3 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:36.986 [ 00:35:36.986 { 00:35:36.986 "name": "BaseBdev3", 00:35:36.986 "aliases": [ 00:35:36.986 "61338d4b-f9f8-4d57-8677-d70061df44fd" 00:35:36.986 ], 00:35:36.986 "product_name": "Malloc disk", 00:35:36.986 "block_size": 512, 00:35:36.986 "num_blocks": 65536, 00:35:36.986 "uuid": "61338d4b-f9f8-4d57-8677-d70061df44fd", 00:35:36.986 "assigned_rate_limits": { 00:35:36.986 "rw_ios_per_sec": 0, 00:35:36.986 "rw_mbytes_per_sec": 0, 00:35:36.986 "r_mbytes_per_sec": 0, 00:35:36.986 "w_mbytes_per_sec": 0 00:35:36.986 }, 00:35:36.986 "claimed": false, 00:35:36.986 "zoned": false, 00:35:36.986 "supported_io_types": { 00:35:36.986 "read": true, 00:35:36.986 "write": true, 00:35:36.986 "unmap": true, 00:35:36.986 "flush": true, 00:35:36.986 "reset": true, 00:35:36.986 "nvme_admin": false, 00:35:36.986 "nvme_io": false, 00:35:36.986 "nvme_io_md": false, 00:35:36.986 "write_zeroes": true, 00:35:36.986 "zcopy": true, 00:35:36.986 "get_zone_info": false, 00:35:36.986 "zone_management": false, 00:35:36.986 "zone_append": false, 00:35:36.986 "compare": false, 00:35:36.986 "compare_and_write": false, 00:35:36.986 "abort": true, 00:35:36.986 "seek_hole": false, 00:35:36.986 "seek_data": false, 
00:35:36.986 "copy": true, 00:35:36.986 "nvme_iov_md": false 00:35:36.986 }, 00:35:36.986 "memory_domains": [ 00:35:36.986 { 00:35:36.986 "dma_device_id": "system", 00:35:36.986 "dma_device_type": 1 00:35:36.986 }, 00:35:36.986 { 00:35:36.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:36.986 "dma_device_type": 2 00:35:36.986 } 00:35:36.986 ], 00:35:36.986 "driver_specific": {} 00:35:36.986 } 00:35:36.986 ] 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.986 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:37.245 BaseBdev4 00:35:37.245 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.245 17:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:35:37.245 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:35:37.245 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:37.245 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:35:37.245 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:37.245 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:37.245 
17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:37.245 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.245 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:37.245 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.245 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:35:37.245 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.245 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:37.245 [ 00:35:37.245 { 00:35:37.245 "name": "BaseBdev4", 00:35:37.245 "aliases": [ 00:35:37.245 "1ffa1635-55e6-4a59-95f0-bc1f7dcab930" 00:35:37.245 ], 00:35:37.245 "product_name": "Malloc disk", 00:35:37.245 "block_size": 512, 00:35:37.245 "num_blocks": 65536, 00:35:37.245 "uuid": "1ffa1635-55e6-4a59-95f0-bc1f7dcab930", 00:35:37.245 "assigned_rate_limits": { 00:35:37.245 "rw_ios_per_sec": 0, 00:35:37.245 "rw_mbytes_per_sec": 0, 00:35:37.245 "r_mbytes_per_sec": 0, 00:35:37.245 "w_mbytes_per_sec": 0 00:35:37.245 }, 00:35:37.245 "claimed": false, 00:35:37.245 "zoned": false, 00:35:37.245 "supported_io_types": { 00:35:37.245 "read": true, 00:35:37.245 "write": true, 00:35:37.245 "unmap": true, 00:35:37.245 "flush": true, 00:35:37.245 "reset": true, 00:35:37.245 "nvme_admin": false, 00:35:37.245 "nvme_io": false, 00:35:37.245 "nvme_io_md": false, 00:35:37.245 "write_zeroes": true, 00:35:37.245 "zcopy": true, 00:35:37.245 "get_zone_info": false, 00:35:37.245 "zone_management": false, 00:35:37.245 "zone_append": false, 00:35:37.245 "compare": false, 00:35:37.245 "compare_and_write": false, 00:35:37.245 "abort": true, 00:35:37.245 "seek_hole": false, 00:35:37.245 "seek_data": false, 00:35:37.245 
"copy": true, 00:35:37.245 "nvme_iov_md": false 00:35:37.245 }, 00:35:37.245 "memory_domains": [ 00:35:37.245 { 00:35:37.245 "dma_device_id": "system", 00:35:37.245 "dma_device_type": 1 00:35:37.245 }, 00:35:37.245 { 00:35:37.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:37.245 "dma_device_type": 2 00:35:37.245 } 00:35:37.245 ], 00:35:37.245 "driver_specific": {} 00:35:37.245 } 00:35:37.245 ] 00:35:37.245 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.245 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:35:37.245 17:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:35:37.245 17:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:35:37.245 17:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:35:37.245 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.245 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:37.245 [2024-11-26 17:32:37.754453] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:37.245 [2024-11-26 17:32:37.754606] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:37.245 [2024-11-26 17:32:37.754676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:37.245 [2024-11-26 17:32:37.757153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:37.245 [2024-11-26 17:32:37.757280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:35:37.245 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.245 17:32:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:35:37.245 17:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:37.245 17:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:37.245 17:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:37.245 17:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:37.245 17:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:37.245 17:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:37.245 17:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:37.245 17:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:37.245 17:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:37.245 17:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:37.245 17:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:37.245 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.245 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:37.246 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.246 17:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:37.246 "name": "Existed_Raid", 00:35:37.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:37.246 "strip_size_kb": 64, 00:35:37.246 "state": "configuring", 00:35:37.246 
"raid_level": "raid0", 00:35:37.246 "superblock": false, 00:35:37.246 "num_base_bdevs": 4, 00:35:37.246 "num_base_bdevs_discovered": 3, 00:35:37.246 "num_base_bdevs_operational": 4, 00:35:37.246 "base_bdevs_list": [ 00:35:37.246 { 00:35:37.246 "name": "BaseBdev1", 00:35:37.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:37.246 "is_configured": false, 00:35:37.246 "data_offset": 0, 00:35:37.246 "data_size": 0 00:35:37.246 }, 00:35:37.246 { 00:35:37.246 "name": "BaseBdev2", 00:35:37.246 "uuid": "5c8a0883-d437-47ba-aa49-7ba039129e70", 00:35:37.246 "is_configured": true, 00:35:37.246 "data_offset": 0, 00:35:37.246 "data_size": 65536 00:35:37.246 }, 00:35:37.246 { 00:35:37.246 "name": "BaseBdev3", 00:35:37.246 "uuid": "61338d4b-f9f8-4d57-8677-d70061df44fd", 00:35:37.246 "is_configured": true, 00:35:37.246 "data_offset": 0, 00:35:37.246 "data_size": 65536 00:35:37.246 }, 00:35:37.246 { 00:35:37.246 "name": "BaseBdev4", 00:35:37.246 "uuid": "1ffa1635-55e6-4a59-95f0-bc1f7dcab930", 00:35:37.246 "is_configured": true, 00:35:37.246 "data_offset": 0, 00:35:37.246 "data_size": 65536 00:35:37.246 } 00:35:37.246 ] 00:35:37.246 }' 00:35:37.246 17:32:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:37.246 17:32:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:37.506 17:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:35:37.506 17:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.506 17:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:37.506 [2024-11-26 17:32:38.189700] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:35:37.506 17:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.506 17:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:35:37.506 17:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:37.506 17:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:37.506 17:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:37.506 17:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:37.506 17:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:37.506 17:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:37.506 17:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:37.506 17:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:37.506 17:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:37.769 17:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:37.769 17:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:37.769 17:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.769 17:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:37.769 17:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.769 17:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:37.769 "name": "Existed_Raid", 00:35:37.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:37.769 "strip_size_kb": 64, 00:35:37.769 "state": "configuring", 00:35:37.769 "raid_level": "raid0", 00:35:37.769 "superblock": false, 00:35:37.769 
"num_base_bdevs": 4, 00:35:37.769 "num_base_bdevs_discovered": 2, 00:35:37.769 "num_base_bdevs_operational": 4, 00:35:37.769 "base_bdevs_list": [ 00:35:37.769 { 00:35:37.769 "name": "BaseBdev1", 00:35:37.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:37.769 "is_configured": false, 00:35:37.769 "data_offset": 0, 00:35:37.769 "data_size": 0 00:35:37.769 }, 00:35:37.769 { 00:35:37.769 "name": null, 00:35:37.769 "uuid": "5c8a0883-d437-47ba-aa49-7ba039129e70", 00:35:37.769 "is_configured": false, 00:35:37.769 "data_offset": 0, 00:35:37.769 "data_size": 65536 00:35:37.769 }, 00:35:37.769 { 00:35:37.769 "name": "BaseBdev3", 00:35:37.769 "uuid": "61338d4b-f9f8-4d57-8677-d70061df44fd", 00:35:37.769 "is_configured": true, 00:35:37.769 "data_offset": 0, 00:35:37.769 "data_size": 65536 00:35:37.769 }, 00:35:37.769 { 00:35:37.769 "name": "BaseBdev4", 00:35:37.769 "uuid": "1ffa1635-55e6-4a59-95f0-bc1f7dcab930", 00:35:37.769 "is_configured": true, 00:35:37.769 "data_offset": 0, 00:35:37.770 "data_size": 65536 00:35:37.770 } 00:35:37.770 ] 00:35:37.770 }' 00:35:37.770 17:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:37.770 17:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:38.028 17:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:38.028 17:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:35:38.028 17:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.028 17:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:38.029 17:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.029 17:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:35:38.029 17:32:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:35:38.029 17:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.029 17:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:38.288 [2024-11-26 17:32:38.748168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:38.288 BaseBdev1 00:35:38.288 17:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.288 17:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:35:38.288 17:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:35:38.288 17:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:38.288 17:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:35:38.288 17:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:38.288 17:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:38.288 17:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:38.288 17:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.288 17:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:38.288 17:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.288 17:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:35:38.288 17:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.288 17:32:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:35:38.288 [ 00:35:38.288 { 00:35:38.288 "name": "BaseBdev1", 00:35:38.288 "aliases": [ 00:35:38.288 "9478952d-0c9c-4b3c-ba76-e154599233df" 00:35:38.288 ], 00:35:38.288 "product_name": "Malloc disk", 00:35:38.288 "block_size": 512, 00:35:38.289 "num_blocks": 65536, 00:35:38.289 "uuid": "9478952d-0c9c-4b3c-ba76-e154599233df", 00:35:38.289 "assigned_rate_limits": { 00:35:38.289 "rw_ios_per_sec": 0, 00:35:38.289 "rw_mbytes_per_sec": 0, 00:35:38.289 "r_mbytes_per_sec": 0, 00:35:38.289 "w_mbytes_per_sec": 0 00:35:38.289 }, 00:35:38.289 "claimed": true, 00:35:38.289 "claim_type": "exclusive_write", 00:35:38.289 "zoned": false, 00:35:38.289 "supported_io_types": { 00:35:38.289 "read": true, 00:35:38.289 "write": true, 00:35:38.289 "unmap": true, 00:35:38.289 "flush": true, 00:35:38.289 "reset": true, 00:35:38.289 "nvme_admin": false, 00:35:38.289 "nvme_io": false, 00:35:38.289 "nvme_io_md": false, 00:35:38.289 "write_zeroes": true, 00:35:38.289 "zcopy": true, 00:35:38.289 "get_zone_info": false, 00:35:38.289 "zone_management": false, 00:35:38.289 "zone_append": false, 00:35:38.289 "compare": false, 00:35:38.289 "compare_and_write": false, 00:35:38.289 "abort": true, 00:35:38.289 "seek_hole": false, 00:35:38.289 "seek_data": false, 00:35:38.289 "copy": true, 00:35:38.289 "nvme_iov_md": false 00:35:38.289 }, 00:35:38.289 "memory_domains": [ 00:35:38.289 { 00:35:38.289 "dma_device_id": "system", 00:35:38.289 "dma_device_type": 1 00:35:38.289 }, 00:35:38.289 { 00:35:38.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:38.289 "dma_device_type": 2 00:35:38.289 } 00:35:38.289 ], 00:35:38.289 "driver_specific": {} 00:35:38.289 } 00:35:38.289 ] 00:35:38.289 17:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.289 17:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:35:38.289 17:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:35:38.289 17:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:38.289 17:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:38.289 17:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:38.289 17:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:38.289 17:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:38.289 17:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:38.289 17:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:38.289 17:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:38.289 17:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:38.289 17:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:38.289 17:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.289 17:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:38.289 17:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:38.289 17:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.289 17:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:38.289 "name": "Existed_Raid", 00:35:38.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:38.289 "strip_size_kb": 64, 00:35:38.289 "state": "configuring", 00:35:38.289 "raid_level": "raid0", 00:35:38.289 "superblock": false, 
00:35:38.289 "num_base_bdevs": 4, 00:35:38.289 "num_base_bdevs_discovered": 3, 00:35:38.289 "num_base_bdevs_operational": 4, 00:35:38.289 "base_bdevs_list": [ 00:35:38.289 { 00:35:38.289 "name": "BaseBdev1", 00:35:38.289 "uuid": "9478952d-0c9c-4b3c-ba76-e154599233df", 00:35:38.289 "is_configured": true, 00:35:38.289 "data_offset": 0, 00:35:38.289 "data_size": 65536 00:35:38.289 }, 00:35:38.289 { 00:35:38.289 "name": null, 00:35:38.289 "uuid": "5c8a0883-d437-47ba-aa49-7ba039129e70", 00:35:38.289 "is_configured": false, 00:35:38.289 "data_offset": 0, 00:35:38.289 "data_size": 65536 00:35:38.289 }, 00:35:38.289 { 00:35:38.289 "name": "BaseBdev3", 00:35:38.289 "uuid": "61338d4b-f9f8-4d57-8677-d70061df44fd", 00:35:38.289 "is_configured": true, 00:35:38.289 "data_offset": 0, 00:35:38.289 "data_size": 65536 00:35:38.289 }, 00:35:38.289 { 00:35:38.289 "name": "BaseBdev4", 00:35:38.289 "uuid": "1ffa1635-55e6-4a59-95f0-bc1f7dcab930", 00:35:38.289 "is_configured": true, 00:35:38.289 "data_offset": 0, 00:35:38.289 "data_size": 65536 00:35:38.289 } 00:35:38.289 ] 00:35:38.289 }' 00:35:38.289 17:32:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:38.289 17:32:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:38.856 17:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:35:38.856 17:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:38.856 17:32:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.856 17:32:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:38.856 17:32:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.856 17:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:35:38.856 17:32:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:35:38.856 17:32:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.856 17:32:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:38.856 [2024-11-26 17:32:39.315658] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:35:38.856 17:32:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.856 17:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:35:38.856 17:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:38.856 17:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:38.856 17:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:38.856 17:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:38.856 17:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:38.856 17:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:38.856 17:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:38.856 17:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:38.856 17:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:38.856 17:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:38.856 17:32:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.856 17:32:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:35:38.856 17:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:38.856 17:32:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.856 17:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:38.856 "name": "Existed_Raid", 00:35:38.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:38.856 "strip_size_kb": 64, 00:35:38.856 "state": "configuring", 00:35:38.856 "raid_level": "raid0", 00:35:38.856 "superblock": false, 00:35:38.856 "num_base_bdevs": 4, 00:35:38.856 "num_base_bdevs_discovered": 2, 00:35:38.856 "num_base_bdevs_operational": 4, 00:35:38.856 "base_bdevs_list": [ 00:35:38.856 { 00:35:38.856 "name": "BaseBdev1", 00:35:38.856 "uuid": "9478952d-0c9c-4b3c-ba76-e154599233df", 00:35:38.856 "is_configured": true, 00:35:38.856 "data_offset": 0, 00:35:38.856 "data_size": 65536 00:35:38.856 }, 00:35:38.856 { 00:35:38.856 "name": null, 00:35:38.856 "uuid": "5c8a0883-d437-47ba-aa49-7ba039129e70", 00:35:38.856 "is_configured": false, 00:35:38.856 "data_offset": 0, 00:35:38.856 "data_size": 65536 00:35:38.856 }, 00:35:38.856 { 00:35:38.856 "name": null, 00:35:38.856 "uuid": "61338d4b-f9f8-4d57-8677-d70061df44fd", 00:35:38.856 "is_configured": false, 00:35:38.856 "data_offset": 0, 00:35:38.856 "data_size": 65536 00:35:38.856 }, 00:35:38.856 { 00:35:38.856 "name": "BaseBdev4", 00:35:38.856 "uuid": "1ffa1635-55e6-4a59-95f0-bc1f7dcab930", 00:35:38.856 "is_configured": true, 00:35:38.856 "data_offset": 0, 00:35:38.856 "data_size": 65536 00:35:38.856 } 00:35:38.856 ] 00:35:38.856 }' 00:35:38.856 17:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:38.856 17:32:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:39.114 17:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:35:39.114 17:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:39.114 17:32:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.114 17:32:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:39.114 17:32:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.374 17:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:35:39.374 17:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:35:39.374 17:32:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.374 17:32:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:39.374 [2024-11-26 17:32:39.814706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:39.374 17:32:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.375 17:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:35:39.375 17:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:39.375 17:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:39.375 17:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:39.375 17:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:39.375 17:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:39.375 17:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:35:39.375 17:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:39.375 17:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:39.375 17:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:39.375 17:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:39.375 17:32:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.375 17:32:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:39.375 17:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:39.375 17:32:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.375 17:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:39.375 "name": "Existed_Raid", 00:35:39.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:39.375 "strip_size_kb": 64, 00:35:39.375 "state": "configuring", 00:35:39.375 "raid_level": "raid0", 00:35:39.375 "superblock": false, 00:35:39.375 "num_base_bdevs": 4, 00:35:39.375 "num_base_bdevs_discovered": 3, 00:35:39.375 "num_base_bdevs_operational": 4, 00:35:39.375 "base_bdevs_list": [ 00:35:39.375 { 00:35:39.375 "name": "BaseBdev1", 00:35:39.375 "uuid": "9478952d-0c9c-4b3c-ba76-e154599233df", 00:35:39.375 "is_configured": true, 00:35:39.375 "data_offset": 0, 00:35:39.375 "data_size": 65536 00:35:39.375 }, 00:35:39.375 { 00:35:39.375 "name": null, 00:35:39.375 "uuid": "5c8a0883-d437-47ba-aa49-7ba039129e70", 00:35:39.375 "is_configured": false, 00:35:39.375 "data_offset": 0, 00:35:39.375 "data_size": 65536 00:35:39.375 }, 00:35:39.375 { 00:35:39.375 "name": "BaseBdev3", 00:35:39.375 "uuid": "61338d4b-f9f8-4d57-8677-d70061df44fd", 00:35:39.375 "is_configured": 
true, 00:35:39.375 "data_offset": 0, 00:35:39.375 "data_size": 65536 00:35:39.375 }, 00:35:39.375 { 00:35:39.375 "name": "BaseBdev4", 00:35:39.375 "uuid": "1ffa1635-55e6-4a59-95f0-bc1f7dcab930", 00:35:39.375 "is_configured": true, 00:35:39.375 "data_offset": 0, 00:35:39.375 "data_size": 65536 00:35:39.375 } 00:35:39.375 ] 00:35:39.375 }' 00:35:39.375 17:32:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:39.375 17:32:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:39.634 17:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:39.634 17:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:35:39.634 17:32:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.634 17:32:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:39.634 17:32:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.893 17:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:35:39.893 17:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:35:39.893 17:32:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.893 17:32:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:39.893 [2024-11-26 17:32:40.337902] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:39.893 17:32:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.893 17:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:35:39.893 17:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:35:39.893 17:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:39.893 17:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:39.893 17:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:39.894 17:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:39.894 17:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:39.894 17:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:39.894 17:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:39.894 17:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:39.894 17:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:39.894 17:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:39.894 17:32:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.894 17:32:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:39.894 17:32:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.894 17:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:39.894 "name": "Existed_Raid", 00:35:39.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:39.894 "strip_size_kb": 64, 00:35:39.894 "state": "configuring", 00:35:39.894 "raid_level": "raid0", 00:35:39.894 "superblock": false, 00:35:39.894 "num_base_bdevs": 4, 00:35:39.894 "num_base_bdevs_discovered": 2, 00:35:39.894 "num_base_bdevs_operational": 4, 00:35:39.894 
"base_bdevs_list": [ 00:35:39.894 { 00:35:39.894 "name": null, 00:35:39.894 "uuid": "9478952d-0c9c-4b3c-ba76-e154599233df", 00:35:39.894 "is_configured": false, 00:35:39.894 "data_offset": 0, 00:35:39.894 "data_size": 65536 00:35:39.894 }, 00:35:39.894 { 00:35:39.894 "name": null, 00:35:39.894 "uuid": "5c8a0883-d437-47ba-aa49-7ba039129e70", 00:35:39.894 "is_configured": false, 00:35:39.894 "data_offset": 0, 00:35:39.894 "data_size": 65536 00:35:39.894 }, 00:35:39.894 { 00:35:39.894 "name": "BaseBdev3", 00:35:39.894 "uuid": "61338d4b-f9f8-4d57-8677-d70061df44fd", 00:35:39.894 "is_configured": true, 00:35:39.894 "data_offset": 0, 00:35:39.894 "data_size": 65536 00:35:39.894 }, 00:35:39.894 { 00:35:39.894 "name": "BaseBdev4", 00:35:39.894 "uuid": "1ffa1635-55e6-4a59-95f0-bc1f7dcab930", 00:35:39.894 "is_configured": true, 00:35:39.894 "data_offset": 0, 00:35:39.894 "data_size": 65536 00:35:39.894 } 00:35:39.894 ] 00:35:39.894 }' 00:35:39.894 17:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:39.894 17:32:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:40.464 17:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:40.464 17:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:35:40.464 17:32:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.464 17:32:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:40.464 17:32:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.464 17:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:35:40.464 17:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:35:40.464 17:32:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.464 17:32:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:40.464 [2024-11-26 17:32:40.978232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:40.464 17:32:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.464 17:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:35:40.464 17:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:40.464 17:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:40.464 17:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:40.464 17:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:40.464 17:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:40.464 17:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:40.464 17:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:40.464 17:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:40.464 17:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:40.464 17:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:40.464 17:32:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.464 17:32:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:40.464 17:32:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:35:40.464 17:32:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.464 17:32:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:40.464 "name": "Existed_Raid", 00:35:40.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:40.465 "strip_size_kb": 64, 00:35:40.465 "state": "configuring", 00:35:40.465 "raid_level": "raid0", 00:35:40.465 "superblock": false, 00:35:40.465 "num_base_bdevs": 4, 00:35:40.465 "num_base_bdevs_discovered": 3, 00:35:40.465 "num_base_bdevs_operational": 4, 00:35:40.465 "base_bdevs_list": [ 00:35:40.465 { 00:35:40.465 "name": null, 00:35:40.465 "uuid": "9478952d-0c9c-4b3c-ba76-e154599233df", 00:35:40.465 "is_configured": false, 00:35:40.465 "data_offset": 0, 00:35:40.465 "data_size": 65536 00:35:40.465 }, 00:35:40.465 { 00:35:40.465 "name": "BaseBdev2", 00:35:40.465 "uuid": "5c8a0883-d437-47ba-aa49-7ba039129e70", 00:35:40.465 "is_configured": true, 00:35:40.465 "data_offset": 0, 00:35:40.465 "data_size": 65536 00:35:40.465 }, 00:35:40.465 { 00:35:40.465 "name": "BaseBdev3", 00:35:40.465 "uuid": "61338d4b-f9f8-4d57-8677-d70061df44fd", 00:35:40.465 "is_configured": true, 00:35:40.465 "data_offset": 0, 00:35:40.465 "data_size": 65536 00:35:40.465 }, 00:35:40.465 { 00:35:40.465 "name": "BaseBdev4", 00:35:40.465 "uuid": "1ffa1635-55e6-4a59-95f0-bc1f7dcab930", 00:35:40.465 "is_configured": true, 00:35:40.465 "data_offset": 0, 00:35:40.465 "data_size": 65536 00:35:40.465 } 00:35:40.465 ] 00:35:40.465 }' 00:35:40.465 17:32:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:40.465 17:32:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:41.046 17:32:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:41.046 17:32:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:35:41.046 17:32:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.046 17:32:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:41.046 17:32:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.046 17:32:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:35:41.046 17:32:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:35:41.046 17:32:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:41.046 17:32:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.046 17:32:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:41.046 17:32:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.046 17:32:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9478952d-0c9c-4b3c-ba76-e154599233df 00:35:41.046 17:32:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.046 17:32:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:41.046 [2024-11-26 17:32:41.574545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:35:41.046 [2024-11-26 17:32:41.574595] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:35:41.046 [2024-11-26 17:32:41.574604] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:35:41.046 [2024-11-26 17:32:41.574893] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:35:41.046 [2024-11-26 17:32:41.575056] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:35:41.046 [2024-11-26 17:32:41.575068] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:35:41.046 [2024-11-26 17:32:41.575366] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:41.046 NewBaseBdev 00:35:41.046 17:32:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.046 17:32:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:35:41.046 17:32:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:35:41.046 17:32:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:41.046 17:32:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:35:41.046 17:32:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:41.046 17:32:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:41.046 17:32:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:41.046 17:32:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.046 17:32:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:41.046 17:32:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.046 17:32:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:35:41.046 17:32:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.046 17:32:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:41.046 [ 00:35:41.046 { 
00:35:41.046 "name": "NewBaseBdev", 00:35:41.046 "aliases": [ 00:35:41.046 "9478952d-0c9c-4b3c-ba76-e154599233df" 00:35:41.046 ], 00:35:41.046 "product_name": "Malloc disk", 00:35:41.046 "block_size": 512, 00:35:41.046 "num_blocks": 65536, 00:35:41.046 "uuid": "9478952d-0c9c-4b3c-ba76-e154599233df", 00:35:41.046 "assigned_rate_limits": { 00:35:41.046 "rw_ios_per_sec": 0, 00:35:41.046 "rw_mbytes_per_sec": 0, 00:35:41.046 "r_mbytes_per_sec": 0, 00:35:41.046 "w_mbytes_per_sec": 0 00:35:41.046 }, 00:35:41.046 "claimed": true, 00:35:41.046 "claim_type": "exclusive_write", 00:35:41.046 "zoned": false, 00:35:41.046 "supported_io_types": { 00:35:41.046 "read": true, 00:35:41.046 "write": true, 00:35:41.046 "unmap": true, 00:35:41.046 "flush": true, 00:35:41.046 "reset": true, 00:35:41.046 "nvme_admin": false, 00:35:41.046 "nvme_io": false, 00:35:41.046 "nvme_io_md": false, 00:35:41.046 "write_zeroes": true, 00:35:41.046 "zcopy": true, 00:35:41.046 "get_zone_info": false, 00:35:41.046 "zone_management": false, 00:35:41.046 "zone_append": false, 00:35:41.046 "compare": false, 00:35:41.046 "compare_and_write": false, 00:35:41.046 "abort": true, 00:35:41.046 "seek_hole": false, 00:35:41.046 "seek_data": false, 00:35:41.046 "copy": true, 00:35:41.046 "nvme_iov_md": false 00:35:41.046 }, 00:35:41.046 "memory_domains": [ 00:35:41.046 { 00:35:41.046 "dma_device_id": "system", 00:35:41.046 "dma_device_type": 1 00:35:41.046 }, 00:35:41.046 { 00:35:41.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:41.046 "dma_device_type": 2 00:35:41.046 } 00:35:41.046 ], 00:35:41.046 "driver_specific": {} 00:35:41.046 } 00:35:41.046 ] 00:35:41.046 17:32:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.046 17:32:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:35:41.046 17:32:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:35:41.047 
17:32:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:41.047 17:32:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:41.047 17:32:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:41.047 17:32:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:41.047 17:32:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:41.047 17:32:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:41.047 17:32:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:41.047 17:32:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:41.047 17:32:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:41.047 17:32:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:41.047 17:32:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:41.047 17:32:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.047 17:32:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:41.047 17:32:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.047 17:32:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:41.047 "name": "Existed_Raid", 00:35:41.047 "uuid": "ca6345d5-69f5-4cb5-b25b-45bb6c3f2b3a", 00:35:41.047 "strip_size_kb": 64, 00:35:41.047 "state": "online", 00:35:41.047 "raid_level": "raid0", 00:35:41.047 "superblock": false, 00:35:41.047 "num_base_bdevs": 4, 00:35:41.047 "num_base_bdevs_discovered": 4, 00:35:41.047 
"num_base_bdevs_operational": 4, 00:35:41.047 "base_bdevs_list": [ 00:35:41.047 { 00:35:41.047 "name": "NewBaseBdev", 00:35:41.047 "uuid": "9478952d-0c9c-4b3c-ba76-e154599233df", 00:35:41.047 "is_configured": true, 00:35:41.047 "data_offset": 0, 00:35:41.047 "data_size": 65536 00:35:41.047 }, 00:35:41.047 { 00:35:41.047 "name": "BaseBdev2", 00:35:41.047 "uuid": "5c8a0883-d437-47ba-aa49-7ba039129e70", 00:35:41.047 "is_configured": true, 00:35:41.047 "data_offset": 0, 00:35:41.047 "data_size": 65536 00:35:41.047 }, 00:35:41.047 { 00:35:41.047 "name": "BaseBdev3", 00:35:41.047 "uuid": "61338d4b-f9f8-4d57-8677-d70061df44fd", 00:35:41.047 "is_configured": true, 00:35:41.047 "data_offset": 0, 00:35:41.047 "data_size": 65536 00:35:41.047 }, 00:35:41.047 { 00:35:41.047 "name": "BaseBdev4", 00:35:41.047 "uuid": "1ffa1635-55e6-4a59-95f0-bc1f7dcab930", 00:35:41.047 "is_configured": true, 00:35:41.047 "data_offset": 0, 00:35:41.047 "data_size": 65536 00:35:41.047 } 00:35:41.047 ] 00:35:41.047 }' 00:35:41.047 17:32:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:41.047 17:32:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:41.630 17:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:35:41.630 17:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:35:41.630 17:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:35:41.630 17:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:35:41.630 17:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:35:41.630 17:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:35:41.630 17:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:35:41.630 17:32:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.630 17:32:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:41.630 17:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:35:41.630 [2024-11-26 17:32:42.074124] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:41.630 17:32:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.630 17:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:41.630 "name": "Existed_Raid", 00:35:41.630 "aliases": [ 00:35:41.630 "ca6345d5-69f5-4cb5-b25b-45bb6c3f2b3a" 00:35:41.630 ], 00:35:41.630 "product_name": "Raid Volume", 00:35:41.630 "block_size": 512, 00:35:41.630 "num_blocks": 262144, 00:35:41.630 "uuid": "ca6345d5-69f5-4cb5-b25b-45bb6c3f2b3a", 00:35:41.630 "assigned_rate_limits": { 00:35:41.630 "rw_ios_per_sec": 0, 00:35:41.630 "rw_mbytes_per_sec": 0, 00:35:41.630 "r_mbytes_per_sec": 0, 00:35:41.630 "w_mbytes_per_sec": 0 00:35:41.630 }, 00:35:41.630 "claimed": false, 00:35:41.630 "zoned": false, 00:35:41.630 "supported_io_types": { 00:35:41.630 "read": true, 00:35:41.630 "write": true, 00:35:41.630 "unmap": true, 00:35:41.630 "flush": true, 00:35:41.630 "reset": true, 00:35:41.630 "nvme_admin": false, 00:35:41.630 "nvme_io": false, 00:35:41.630 "nvme_io_md": false, 00:35:41.630 "write_zeroes": true, 00:35:41.630 "zcopy": false, 00:35:41.630 "get_zone_info": false, 00:35:41.630 "zone_management": false, 00:35:41.630 "zone_append": false, 00:35:41.630 "compare": false, 00:35:41.630 "compare_and_write": false, 00:35:41.630 "abort": false, 00:35:41.630 "seek_hole": false, 00:35:41.630 "seek_data": false, 00:35:41.630 "copy": false, 00:35:41.630 "nvme_iov_md": false 00:35:41.630 }, 00:35:41.630 "memory_domains": [ 00:35:41.630 { 00:35:41.630 "dma_device_id": "system", 
00:35:41.630 "dma_device_type": 1 00:35:41.630 }, 00:35:41.630 { 00:35:41.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:41.630 "dma_device_type": 2 00:35:41.630 }, 00:35:41.630 { 00:35:41.630 "dma_device_id": "system", 00:35:41.630 "dma_device_type": 1 00:35:41.630 }, 00:35:41.630 { 00:35:41.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:41.630 "dma_device_type": 2 00:35:41.630 }, 00:35:41.630 { 00:35:41.630 "dma_device_id": "system", 00:35:41.630 "dma_device_type": 1 00:35:41.630 }, 00:35:41.630 { 00:35:41.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:41.630 "dma_device_type": 2 00:35:41.630 }, 00:35:41.630 { 00:35:41.630 "dma_device_id": "system", 00:35:41.630 "dma_device_type": 1 00:35:41.630 }, 00:35:41.630 { 00:35:41.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:41.630 "dma_device_type": 2 00:35:41.630 } 00:35:41.630 ], 00:35:41.630 "driver_specific": { 00:35:41.630 "raid": { 00:35:41.630 "uuid": "ca6345d5-69f5-4cb5-b25b-45bb6c3f2b3a", 00:35:41.630 "strip_size_kb": 64, 00:35:41.630 "state": "online", 00:35:41.630 "raid_level": "raid0", 00:35:41.630 "superblock": false, 00:35:41.630 "num_base_bdevs": 4, 00:35:41.630 "num_base_bdevs_discovered": 4, 00:35:41.630 "num_base_bdevs_operational": 4, 00:35:41.630 "base_bdevs_list": [ 00:35:41.630 { 00:35:41.630 "name": "NewBaseBdev", 00:35:41.630 "uuid": "9478952d-0c9c-4b3c-ba76-e154599233df", 00:35:41.630 "is_configured": true, 00:35:41.630 "data_offset": 0, 00:35:41.630 "data_size": 65536 00:35:41.630 }, 00:35:41.630 { 00:35:41.630 "name": "BaseBdev2", 00:35:41.630 "uuid": "5c8a0883-d437-47ba-aa49-7ba039129e70", 00:35:41.630 "is_configured": true, 00:35:41.630 "data_offset": 0, 00:35:41.630 "data_size": 65536 00:35:41.630 }, 00:35:41.630 { 00:35:41.630 "name": "BaseBdev3", 00:35:41.630 "uuid": "61338d4b-f9f8-4d57-8677-d70061df44fd", 00:35:41.630 "is_configured": true, 00:35:41.630 "data_offset": 0, 00:35:41.630 "data_size": 65536 00:35:41.630 }, 00:35:41.630 { 00:35:41.630 "name": "BaseBdev4", 
00:35:41.630 "uuid": "1ffa1635-55e6-4a59-95f0-bc1f7dcab930", 00:35:41.630 "is_configured": true, 00:35:41.630 "data_offset": 0, 00:35:41.630 "data_size": 65536 00:35:41.630 } 00:35:41.630 ] 00:35:41.630 } 00:35:41.630 } 00:35:41.630 }' 00:35:41.630 17:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:41.630 17:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:35:41.630 BaseBdev2 00:35:41.630 BaseBdev3 00:35:41.630 BaseBdev4' 00:35:41.631 17:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:41.631 17:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:35:41.631 17:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:41.631 17:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:35:41.631 17:32:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.631 17:32:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:41.631 17:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:41.631 17:32:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.631 17:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:41.631 17:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:41.631 17:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:41.631 17:32:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:41.631 17:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:35:41.631 17:32:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.631 17:32:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:41.631 17:32:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.631 17:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:41.631 17:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:41.631 17:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:41.631 17:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:41.631 17:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:35:41.631 17:32:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.631 17:32:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:41.631 17:32:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.891 17:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:41.891 17:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:41.891 17:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:41.891 17:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:35:41.891 17:32:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:41.891 17:32:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.891 17:32:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:41.891 17:32:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.891 17:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:41.891 17:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:41.891 17:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:35:41.891 17:32:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.891 17:32:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:41.891 [2024-11-26 17:32:42.389239] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:41.891 [2024-11-26 17:32:42.389357] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:41.891 [2024-11-26 17:32:42.389461] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:41.891 [2024-11-26 17:32:42.389550] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:41.891 [2024-11-26 17:32:42.389562] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:35:41.891 17:32:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.891 17:32:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69632 00:35:41.891 17:32:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 
-- # '[' -z 69632 ']' 00:35:41.891 17:32:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69632 00:35:41.891 17:32:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:35:41.891 17:32:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:41.891 17:32:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69632 00:35:41.891 17:32:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:41.891 killing process with pid 69632 00:35:41.891 17:32:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:41.891 17:32:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69632' 00:35:41.891 17:32:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69632 00:35:41.891 [2024-11-26 17:32:42.438045] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:41.891 17:32:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69632 00:35:42.460 [2024-11-26 17:32:42.866771] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:43.840 17:32:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:35:43.840 00:35:43.840 real 0m12.048s 00:35:43.840 user 0m19.013s 00:35:43.840 sys 0m2.115s 00:35:43.840 17:32:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:43.840 ************************************ 00:35:43.840 END TEST raid_state_function_test 00:35:43.840 ************************************ 00:35:43.840 17:32:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:43.840 17:32:44 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 
00:35:43.840 17:32:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:35:43.840 17:32:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:43.840 17:32:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:43.840 ************************************ 00:35:43.840 START TEST raid_state_function_test_sb 00:35:43.840 ************************************ 00:35:43.840 17:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:35:43.840 17:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:35:43.840 17:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:35:43.840 17:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:35:43.840 17:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:35:43.840 17:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:35:43.840 17:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:43.840 17:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:35:43.840 17:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:35:43.840 17:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:43.840 17:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:35:43.840 17:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:35:43.840 17:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:43.840 17:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:35:43.840 17:32:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:35:43.840 17:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:43.840 17:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:35:43.840 17:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:35:43.840 17:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:43.840 17:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:35:43.840 17:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:35:43.840 17:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:35:43.840 17:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:35:43.840 17:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:35:43.840 17:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:35:43.840 17:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:35:43.840 17:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:35:43.840 17:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:35:43.840 17:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:35:43.840 17:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:35:43.840 17:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70313 00:35:43.841 17:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:35:43.841 17:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70313' 00:35:43.841 Process raid pid: 70313 00:35:43.841 17:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70313 00:35:43.841 17:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70313 ']' 00:35:43.841 17:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:43.841 17:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:43.841 17:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:43.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:43.841 17:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:43.841 17:32:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:43.841 [2024-11-26 17:32:44.275596] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:35:43.841 [2024-11-26 17:32:44.275809] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:43.841 [2024-11-26 17:32:44.452575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:44.099 [2024-11-26 17:32:44.586234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:44.357 [2024-11-26 17:32:44.817297] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:44.357 [2024-11-26 17:32:44.817451] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:44.615 17:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:44.615 17:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:35:44.615 17:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:35:44.615 17:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.615 17:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:44.615 [2024-11-26 17:32:45.177552] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:44.615 [2024-11-26 17:32:45.177648] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:44.615 [2024-11-26 17:32:45.177668] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:44.615 [2024-11-26 17:32:45.177685] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:44.615 [2024-11-26 17:32:45.177697] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:35:44.615 [2024-11-26 17:32:45.177712] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:35:44.615 [2024-11-26 17:32:45.177723] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:35:44.615 [2024-11-26 17:32:45.177738] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:35:44.615 17:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.615 17:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:35:44.615 17:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:44.615 17:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:44.615 17:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:44.615 17:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:44.616 17:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:44.616 17:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:44.616 17:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:44.616 17:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:44.616 17:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:44.616 17:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:44.616 17:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:44.616 17:32:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.616 17:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:44.616 17:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.616 17:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:44.616 "name": "Existed_Raid", 00:35:44.616 "uuid": "220bbc0e-8c95-4a3c-a6fd-ff4990dda06d", 00:35:44.616 "strip_size_kb": 64, 00:35:44.616 "state": "configuring", 00:35:44.616 "raid_level": "raid0", 00:35:44.616 "superblock": true, 00:35:44.616 "num_base_bdevs": 4, 00:35:44.616 "num_base_bdevs_discovered": 0, 00:35:44.616 "num_base_bdevs_operational": 4, 00:35:44.616 "base_bdevs_list": [ 00:35:44.616 { 00:35:44.616 "name": "BaseBdev1", 00:35:44.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:44.616 "is_configured": false, 00:35:44.616 "data_offset": 0, 00:35:44.616 "data_size": 0 00:35:44.616 }, 00:35:44.616 { 00:35:44.616 "name": "BaseBdev2", 00:35:44.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:44.616 "is_configured": false, 00:35:44.616 "data_offset": 0, 00:35:44.616 "data_size": 0 00:35:44.616 }, 00:35:44.616 { 00:35:44.616 "name": "BaseBdev3", 00:35:44.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:44.616 "is_configured": false, 00:35:44.616 "data_offset": 0, 00:35:44.616 "data_size": 0 00:35:44.616 }, 00:35:44.616 { 00:35:44.616 "name": "BaseBdev4", 00:35:44.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:44.616 "is_configured": false, 00:35:44.616 "data_offset": 0, 00:35:44.616 "data_size": 0 00:35:44.616 } 00:35:44.616 ] 00:35:44.616 }' 00:35:44.616 17:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:44.616 17:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:45.214 17:32:45 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:35:45.214 17:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.214 17:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:45.214 [2024-11-26 17:32:45.668552] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:45.214 [2024-11-26 17:32:45.668671] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:35:45.214 17:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.214 17:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:35:45.214 17:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.214 17:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:45.214 [2024-11-26 17:32:45.680559] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:45.214 [2024-11-26 17:32:45.680678] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:45.214 [2024-11-26 17:32:45.680715] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:45.214 [2024-11-26 17:32:45.680755] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:45.214 [2024-11-26 17:32:45.680840] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:35:45.214 [2024-11-26 17:32:45.680876] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:35:45.214 [2024-11-26 17:32:45.680910] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:35:45.214 [2024-11-26 17:32:45.680937] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:35:45.214 17:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.214 17:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:35:45.214 17:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.214 17:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:45.214 [2024-11-26 17:32:45.735168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:45.214 BaseBdev1 00:35:45.214 17:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.214 17:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:35:45.214 17:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:35:45.214 17:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:45.214 17:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:35:45.214 17:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:45.214 17:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:45.214 17:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:45.214 17:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.214 17:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:45.214 17:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:35:45.214 17:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:35:45.214 17:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.214 17:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:45.214 [ 00:35:45.214 { 00:35:45.214 "name": "BaseBdev1", 00:35:45.214 "aliases": [ 00:35:45.214 "754e21bd-adca-4734-ad8e-96bb6c85f7eb" 00:35:45.214 ], 00:35:45.214 "product_name": "Malloc disk", 00:35:45.214 "block_size": 512, 00:35:45.214 "num_blocks": 65536, 00:35:45.214 "uuid": "754e21bd-adca-4734-ad8e-96bb6c85f7eb", 00:35:45.214 "assigned_rate_limits": { 00:35:45.214 "rw_ios_per_sec": 0, 00:35:45.214 "rw_mbytes_per_sec": 0, 00:35:45.214 "r_mbytes_per_sec": 0, 00:35:45.214 "w_mbytes_per_sec": 0 00:35:45.214 }, 00:35:45.214 "claimed": true, 00:35:45.214 "claim_type": "exclusive_write", 00:35:45.214 "zoned": false, 00:35:45.214 "supported_io_types": { 00:35:45.214 "read": true, 00:35:45.214 "write": true, 00:35:45.214 "unmap": true, 00:35:45.214 "flush": true, 00:35:45.214 "reset": true, 00:35:45.214 "nvme_admin": false, 00:35:45.214 "nvme_io": false, 00:35:45.214 "nvme_io_md": false, 00:35:45.214 "write_zeroes": true, 00:35:45.214 "zcopy": true, 00:35:45.214 "get_zone_info": false, 00:35:45.214 "zone_management": false, 00:35:45.214 "zone_append": false, 00:35:45.214 "compare": false, 00:35:45.214 "compare_and_write": false, 00:35:45.214 "abort": true, 00:35:45.214 "seek_hole": false, 00:35:45.214 "seek_data": false, 00:35:45.214 "copy": true, 00:35:45.214 "nvme_iov_md": false 00:35:45.214 }, 00:35:45.214 "memory_domains": [ 00:35:45.214 { 00:35:45.214 "dma_device_id": "system", 00:35:45.214 "dma_device_type": 1 00:35:45.214 }, 00:35:45.214 { 00:35:45.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:45.214 "dma_device_type": 2 00:35:45.214 } 00:35:45.214 ], 00:35:45.214 "driver_specific": {} 
00:35:45.214 } 00:35:45.214 ] 00:35:45.214 17:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.214 17:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:35:45.214 17:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:35:45.214 17:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:45.214 17:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:45.214 17:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:45.214 17:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:45.214 17:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:45.214 17:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:45.214 17:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:45.214 17:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:45.214 17:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:45.214 17:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:45.214 17:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:45.214 17:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.214 17:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:45.214 17:32:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.214 17:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:45.214 "name": "Existed_Raid", 00:35:45.214 "uuid": "540d52aa-3cbb-491e-8910-03d0ec388a4b", 00:35:45.214 "strip_size_kb": 64, 00:35:45.214 "state": "configuring", 00:35:45.214 "raid_level": "raid0", 00:35:45.214 "superblock": true, 00:35:45.214 "num_base_bdevs": 4, 00:35:45.214 "num_base_bdevs_discovered": 1, 00:35:45.214 "num_base_bdevs_operational": 4, 00:35:45.214 "base_bdevs_list": [ 00:35:45.214 { 00:35:45.214 "name": "BaseBdev1", 00:35:45.214 "uuid": "754e21bd-adca-4734-ad8e-96bb6c85f7eb", 00:35:45.214 "is_configured": true, 00:35:45.214 "data_offset": 2048, 00:35:45.214 "data_size": 63488 00:35:45.214 }, 00:35:45.214 { 00:35:45.214 "name": "BaseBdev2", 00:35:45.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:45.214 "is_configured": false, 00:35:45.214 "data_offset": 0, 00:35:45.215 "data_size": 0 00:35:45.215 }, 00:35:45.215 { 00:35:45.215 "name": "BaseBdev3", 00:35:45.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:45.215 "is_configured": false, 00:35:45.215 "data_offset": 0, 00:35:45.215 "data_size": 0 00:35:45.215 }, 00:35:45.215 { 00:35:45.215 "name": "BaseBdev4", 00:35:45.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:45.215 "is_configured": false, 00:35:45.215 "data_offset": 0, 00:35:45.215 "data_size": 0 00:35:45.215 } 00:35:45.215 ] 00:35:45.215 }' 00:35:45.215 17:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:45.215 17:32:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:45.781 17:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:35:45.781 17:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.781 17:32:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:35:45.781 [2024-11-26 17:32:46.218402] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:45.781 [2024-11-26 17:32:46.218467] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:35:45.781 17:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.781 17:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:35:45.781 17:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.781 17:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:45.781 [2024-11-26 17:32:46.230463] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:45.781 [2024-11-26 17:32:46.232637] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:45.781 [2024-11-26 17:32:46.232741] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:45.781 [2024-11-26 17:32:46.232759] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:35:45.781 [2024-11-26 17:32:46.232773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:35:45.781 [2024-11-26 17:32:46.232782] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:35:45.781 [2024-11-26 17:32:46.232791] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:35:45.781 17:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.781 17:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:35:45.781 17:32:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:35:45.781 17:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:35:45.781 17:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:45.781 17:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:45.781 17:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:45.781 17:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:45.781 17:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:45.781 17:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:45.781 17:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:45.781 17:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:45.781 17:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:45.781 17:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:45.781 17:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:45.781 17:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.781 17:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:45.781 17:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.781 17:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:45.781 "name": 
"Existed_Raid", 00:35:45.781 "uuid": "a6f00f92-5f1f-40e2-a8dc-ce02a55ce366", 00:35:45.781 "strip_size_kb": 64, 00:35:45.781 "state": "configuring", 00:35:45.781 "raid_level": "raid0", 00:35:45.781 "superblock": true, 00:35:45.781 "num_base_bdevs": 4, 00:35:45.782 "num_base_bdevs_discovered": 1, 00:35:45.782 "num_base_bdevs_operational": 4, 00:35:45.782 "base_bdevs_list": [ 00:35:45.782 { 00:35:45.782 "name": "BaseBdev1", 00:35:45.782 "uuid": "754e21bd-adca-4734-ad8e-96bb6c85f7eb", 00:35:45.782 "is_configured": true, 00:35:45.782 "data_offset": 2048, 00:35:45.782 "data_size": 63488 00:35:45.782 }, 00:35:45.782 { 00:35:45.782 "name": "BaseBdev2", 00:35:45.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:45.782 "is_configured": false, 00:35:45.782 "data_offset": 0, 00:35:45.782 "data_size": 0 00:35:45.782 }, 00:35:45.782 { 00:35:45.782 "name": "BaseBdev3", 00:35:45.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:45.782 "is_configured": false, 00:35:45.782 "data_offset": 0, 00:35:45.782 "data_size": 0 00:35:45.782 }, 00:35:45.782 { 00:35:45.782 "name": "BaseBdev4", 00:35:45.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:45.782 "is_configured": false, 00:35:45.782 "data_offset": 0, 00:35:45.782 "data_size": 0 00:35:45.782 } 00:35:45.782 ] 00:35:45.782 }' 00:35:45.782 17:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:45.782 17:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:46.040 17:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:35:46.040 17:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.040 17:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:46.040 [2024-11-26 17:32:46.733367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:35:46.299 BaseBdev2 00:35:46.299 17:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.299 17:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:35:46.299 17:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:35:46.299 17:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:46.299 17:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:35:46.299 17:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:46.299 17:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:46.299 17:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:46.299 17:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.299 17:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:46.299 17:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.299 17:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:35:46.299 17:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.299 17:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:46.299 [ 00:35:46.299 { 00:35:46.299 "name": "BaseBdev2", 00:35:46.299 "aliases": [ 00:35:46.299 "9bc351cf-7af1-416d-a321-4c4eca125010" 00:35:46.299 ], 00:35:46.299 "product_name": "Malloc disk", 00:35:46.299 "block_size": 512, 00:35:46.299 "num_blocks": 65536, 00:35:46.299 "uuid": "9bc351cf-7af1-416d-a321-4c4eca125010", 00:35:46.299 
"assigned_rate_limits": { 00:35:46.299 "rw_ios_per_sec": 0, 00:35:46.299 "rw_mbytes_per_sec": 0, 00:35:46.299 "r_mbytes_per_sec": 0, 00:35:46.299 "w_mbytes_per_sec": 0 00:35:46.299 }, 00:35:46.299 "claimed": true, 00:35:46.299 "claim_type": "exclusive_write", 00:35:46.299 "zoned": false, 00:35:46.299 "supported_io_types": { 00:35:46.299 "read": true, 00:35:46.299 "write": true, 00:35:46.299 "unmap": true, 00:35:46.299 "flush": true, 00:35:46.299 "reset": true, 00:35:46.299 "nvme_admin": false, 00:35:46.299 "nvme_io": false, 00:35:46.299 "nvme_io_md": false, 00:35:46.299 "write_zeroes": true, 00:35:46.299 "zcopy": true, 00:35:46.299 "get_zone_info": false, 00:35:46.299 "zone_management": false, 00:35:46.299 "zone_append": false, 00:35:46.299 "compare": false, 00:35:46.299 "compare_and_write": false, 00:35:46.299 "abort": true, 00:35:46.299 "seek_hole": false, 00:35:46.299 "seek_data": false, 00:35:46.299 "copy": true, 00:35:46.299 "nvme_iov_md": false 00:35:46.299 }, 00:35:46.299 "memory_domains": [ 00:35:46.299 { 00:35:46.299 "dma_device_id": "system", 00:35:46.299 "dma_device_type": 1 00:35:46.299 }, 00:35:46.299 { 00:35:46.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:46.299 "dma_device_type": 2 00:35:46.299 } 00:35:46.299 ], 00:35:46.299 "driver_specific": {} 00:35:46.299 } 00:35:46.299 ] 00:35:46.299 17:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.299 17:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:35:46.299 17:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:35:46.299 17:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:35:46.299 17:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:35:46.299 17:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:35:46.299 17:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:46.299 17:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:46.299 17:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:46.299 17:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:46.299 17:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:46.299 17:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:46.299 17:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:46.299 17:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:46.299 17:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:46.299 17:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.299 17:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:46.299 17:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:46.299 17:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.299 17:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:46.299 "name": "Existed_Raid", 00:35:46.299 "uuid": "a6f00f92-5f1f-40e2-a8dc-ce02a55ce366", 00:35:46.299 "strip_size_kb": 64, 00:35:46.299 "state": "configuring", 00:35:46.299 "raid_level": "raid0", 00:35:46.299 "superblock": true, 00:35:46.299 "num_base_bdevs": 4, 00:35:46.299 "num_base_bdevs_discovered": 2, 00:35:46.299 "num_base_bdevs_operational": 4, 
00:35:46.299 "base_bdevs_list": [ 00:35:46.299 { 00:35:46.299 "name": "BaseBdev1", 00:35:46.299 "uuid": "754e21bd-adca-4734-ad8e-96bb6c85f7eb", 00:35:46.299 "is_configured": true, 00:35:46.299 "data_offset": 2048, 00:35:46.299 "data_size": 63488 00:35:46.299 }, 00:35:46.299 { 00:35:46.299 "name": "BaseBdev2", 00:35:46.299 "uuid": "9bc351cf-7af1-416d-a321-4c4eca125010", 00:35:46.299 "is_configured": true, 00:35:46.299 "data_offset": 2048, 00:35:46.299 "data_size": 63488 00:35:46.299 }, 00:35:46.299 { 00:35:46.299 "name": "BaseBdev3", 00:35:46.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:46.299 "is_configured": false, 00:35:46.299 "data_offset": 0, 00:35:46.299 "data_size": 0 00:35:46.299 }, 00:35:46.299 { 00:35:46.299 "name": "BaseBdev4", 00:35:46.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:46.299 "is_configured": false, 00:35:46.299 "data_offset": 0, 00:35:46.299 "data_size": 0 00:35:46.299 } 00:35:46.299 ] 00:35:46.299 }' 00:35:46.299 17:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:46.299 17:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:46.557 17:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:35:46.557 17:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.557 17:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:46.816 [2024-11-26 17:32:47.267944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:46.816 BaseBdev3 00:35:46.816 17:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.816 17:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:35:46.816 17:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:35:46.816 17:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:46.816 17:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:35:46.816 17:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:46.816 17:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:46.816 17:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:46.816 17:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.816 17:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:46.816 17:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.816 17:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:35:46.816 17:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.816 17:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:46.816 [ 00:35:46.816 { 00:35:46.816 "name": "BaseBdev3", 00:35:46.816 "aliases": [ 00:35:46.816 "154a859e-1602-43af-8302-abf4a0e11f62" 00:35:46.816 ], 00:35:46.816 "product_name": "Malloc disk", 00:35:46.816 "block_size": 512, 00:35:46.816 "num_blocks": 65536, 00:35:46.816 "uuid": "154a859e-1602-43af-8302-abf4a0e11f62", 00:35:46.816 "assigned_rate_limits": { 00:35:46.816 "rw_ios_per_sec": 0, 00:35:46.816 "rw_mbytes_per_sec": 0, 00:35:46.816 "r_mbytes_per_sec": 0, 00:35:46.816 "w_mbytes_per_sec": 0 00:35:46.816 }, 00:35:46.816 "claimed": true, 00:35:46.816 "claim_type": "exclusive_write", 00:35:46.816 "zoned": false, 00:35:46.816 "supported_io_types": { 00:35:46.816 "read": true, 00:35:46.816 
"write": true, 00:35:46.816 "unmap": true, 00:35:46.816 "flush": true, 00:35:46.816 "reset": true, 00:35:46.816 "nvme_admin": false, 00:35:46.816 "nvme_io": false, 00:35:46.816 "nvme_io_md": false, 00:35:46.816 "write_zeroes": true, 00:35:46.816 "zcopy": true, 00:35:46.816 "get_zone_info": false, 00:35:46.816 "zone_management": false, 00:35:46.816 "zone_append": false, 00:35:46.816 "compare": false, 00:35:46.816 "compare_and_write": false, 00:35:46.816 "abort": true, 00:35:46.816 "seek_hole": false, 00:35:46.816 "seek_data": false, 00:35:46.816 "copy": true, 00:35:46.816 "nvme_iov_md": false 00:35:46.816 }, 00:35:46.816 "memory_domains": [ 00:35:46.816 { 00:35:46.816 "dma_device_id": "system", 00:35:46.816 "dma_device_type": 1 00:35:46.816 }, 00:35:46.816 { 00:35:46.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:46.816 "dma_device_type": 2 00:35:46.816 } 00:35:46.816 ], 00:35:46.816 "driver_specific": {} 00:35:46.816 } 00:35:46.816 ] 00:35:46.816 17:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.816 17:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:35:46.816 17:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:35:46.816 17:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:35:46.816 17:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:35:46.816 17:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:46.816 17:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:46.816 17:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:46.816 17:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:35:46.816 17:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:46.816 17:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:46.816 17:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:46.816 17:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:46.816 17:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:46.816 17:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:46.816 17:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.816 17:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:46.816 17:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:46.816 17:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.816 17:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:46.816 "name": "Existed_Raid", 00:35:46.816 "uuid": "a6f00f92-5f1f-40e2-a8dc-ce02a55ce366", 00:35:46.816 "strip_size_kb": 64, 00:35:46.816 "state": "configuring", 00:35:46.816 "raid_level": "raid0", 00:35:46.816 "superblock": true, 00:35:46.816 "num_base_bdevs": 4, 00:35:46.816 "num_base_bdevs_discovered": 3, 00:35:46.816 "num_base_bdevs_operational": 4, 00:35:46.816 "base_bdevs_list": [ 00:35:46.816 { 00:35:46.816 "name": "BaseBdev1", 00:35:46.816 "uuid": "754e21bd-adca-4734-ad8e-96bb6c85f7eb", 00:35:46.816 "is_configured": true, 00:35:46.816 "data_offset": 2048, 00:35:46.816 "data_size": 63488 00:35:46.816 }, 00:35:46.816 { 00:35:46.816 "name": "BaseBdev2", 00:35:46.816 "uuid": 
"9bc351cf-7af1-416d-a321-4c4eca125010", 00:35:46.816 "is_configured": true, 00:35:46.816 "data_offset": 2048, 00:35:46.816 "data_size": 63488 00:35:46.816 }, 00:35:46.816 { 00:35:46.816 "name": "BaseBdev3", 00:35:46.816 "uuid": "154a859e-1602-43af-8302-abf4a0e11f62", 00:35:46.816 "is_configured": true, 00:35:46.816 "data_offset": 2048, 00:35:46.816 "data_size": 63488 00:35:46.816 }, 00:35:46.816 { 00:35:46.816 "name": "BaseBdev4", 00:35:46.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:46.816 "is_configured": false, 00:35:46.816 "data_offset": 0, 00:35:46.816 "data_size": 0 00:35:46.816 } 00:35:46.816 ] 00:35:46.816 }' 00:35:46.816 17:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:46.816 17:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:47.074 17:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:35:47.074 17:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.074 17:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:47.333 [2024-11-26 17:32:47.773669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:35:47.333 [2024-11-26 17:32:47.773969] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:35:47.333 [2024-11-26 17:32:47.773986] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:35:47.333 [2024-11-26 17:32:47.774283] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:35:47.333 BaseBdev4 00:35:47.333 [2024-11-26 17:32:47.774452] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:35:47.333 [2024-11-26 17:32:47.774465] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:35:47.333 [2024-11-26 17:32:47.774663] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:47.333 17:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.333 17:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:35:47.333 17:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:35:47.333 17:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:47.333 17:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:35:47.333 17:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:47.334 17:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:47.334 17:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:47.334 17:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.334 17:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:47.334 17:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.334 17:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:35:47.334 17:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.334 17:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:47.334 [ 00:35:47.334 { 00:35:47.334 "name": "BaseBdev4", 00:35:47.334 "aliases": [ 00:35:47.334 "17fa84a4-a359-4555-bcaa-d832caf771b3" 00:35:47.334 ], 00:35:47.334 "product_name": "Malloc disk", 00:35:47.334 "block_size": 512, 00:35:47.334 
"num_blocks": 65536, 00:35:47.334 "uuid": "17fa84a4-a359-4555-bcaa-d832caf771b3", 00:35:47.334 "assigned_rate_limits": { 00:35:47.334 "rw_ios_per_sec": 0, 00:35:47.334 "rw_mbytes_per_sec": 0, 00:35:47.334 "r_mbytes_per_sec": 0, 00:35:47.334 "w_mbytes_per_sec": 0 00:35:47.334 }, 00:35:47.334 "claimed": true, 00:35:47.334 "claim_type": "exclusive_write", 00:35:47.334 "zoned": false, 00:35:47.334 "supported_io_types": { 00:35:47.334 "read": true, 00:35:47.334 "write": true, 00:35:47.334 "unmap": true, 00:35:47.334 "flush": true, 00:35:47.334 "reset": true, 00:35:47.334 "nvme_admin": false, 00:35:47.334 "nvme_io": false, 00:35:47.334 "nvme_io_md": false, 00:35:47.334 "write_zeroes": true, 00:35:47.334 "zcopy": true, 00:35:47.334 "get_zone_info": false, 00:35:47.334 "zone_management": false, 00:35:47.334 "zone_append": false, 00:35:47.334 "compare": false, 00:35:47.334 "compare_and_write": false, 00:35:47.334 "abort": true, 00:35:47.334 "seek_hole": false, 00:35:47.334 "seek_data": false, 00:35:47.334 "copy": true, 00:35:47.334 "nvme_iov_md": false 00:35:47.334 }, 00:35:47.334 "memory_domains": [ 00:35:47.334 { 00:35:47.334 "dma_device_id": "system", 00:35:47.334 "dma_device_type": 1 00:35:47.334 }, 00:35:47.334 { 00:35:47.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:47.334 "dma_device_type": 2 00:35:47.334 } 00:35:47.334 ], 00:35:47.334 "driver_specific": {} 00:35:47.334 } 00:35:47.334 ] 00:35:47.334 17:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.334 17:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:35:47.334 17:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:35:47.334 17:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:35:47.334 17:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:35:47.334 17:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:47.334 17:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:47.334 17:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:47.334 17:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:47.334 17:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:47.334 17:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:47.334 17:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:47.334 17:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:47.334 17:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:47.334 17:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:47.334 17:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:47.334 17:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.334 17:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:47.334 17:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.334 17:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:47.334 "name": "Existed_Raid", 00:35:47.334 "uuid": "a6f00f92-5f1f-40e2-a8dc-ce02a55ce366", 00:35:47.334 "strip_size_kb": 64, 00:35:47.334 "state": "online", 00:35:47.334 "raid_level": "raid0", 00:35:47.334 "superblock": true, 00:35:47.334 "num_base_bdevs": 4, 
00:35:47.334 "num_base_bdevs_discovered": 4, 00:35:47.334 "num_base_bdevs_operational": 4, 00:35:47.334 "base_bdevs_list": [ 00:35:47.334 { 00:35:47.334 "name": "BaseBdev1", 00:35:47.334 "uuid": "754e21bd-adca-4734-ad8e-96bb6c85f7eb", 00:35:47.334 "is_configured": true, 00:35:47.334 "data_offset": 2048, 00:35:47.334 "data_size": 63488 00:35:47.334 }, 00:35:47.334 { 00:35:47.334 "name": "BaseBdev2", 00:35:47.334 "uuid": "9bc351cf-7af1-416d-a321-4c4eca125010", 00:35:47.334 "is_configured": true, 00:35:47.334 "data_offset": 2048, 00:35:47.334 "data_size": 63488 00:35:47.334 }, 00:35:47.334 { 00:35:47.334 "name": "BaseBdev3", 00:35:47.334 "uuid": "154a859e-1602-43af-8302-abf4a0e11f62", 00:35:47.334 "is_configured": true, 00:35:47.334 "data_offset": 2048, 00:35:47.334 "data_size": 63488 00:35:47.334 }, 00:35:47.334 { 00:35:47.334 "name": "BaseBdev4", 00:35:47.334 "uuid": "17fa84a4-a359-4555-bcaa-d832caf771b3", 00:35:47.334 "is_configured": true, 00:35:47.334 "data_offset": 2048, 00:35:47.334 "data_size": 63488 00:35:47.334 } 00:35:47.334 ] 00:35:47.334 }' 00:35:47.334 17:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:47.334 17:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:47.594 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:35:47.594 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:35:47.594 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:35:47.594 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:35:47.594 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:35:47.594 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:35:47.594 
17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:35:47.594 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:35:47.594 17:32:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.594 17:32:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:47.594 [2024-11-26 17:32:48.269343] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:47.853 17:32:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.853 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:47.853 "name": "Existed_Raid", 00:35:47.853 "aliases": [ 00:35:47.853 "a6f00f92-5f1f-40e2-a8dc-ce02a55ce366" 00:35:47.853 ], 00:35:47.853 "product_name": "Raid Volume", 00:35:47.853 "block_size": 512, 00:35:47.853 "num_blocks": 253952, 00:35:47.853 "uuid": "a6f00f92-5f1f-40e2-a8dc-ce02a55ce366", 00:35:47.853 "assigned_rate_limits": { 00:35:47.853 "rw_ios_per_sec": 0, 00:35:47.853 "rw_mbytes_per_sec": 0, 00:35:47.853 "r_mbytes_per_sec": 0, 00:35:47.853 "w_mbytes_per_sec": 0 00:35:47.853 }, 00:35:47.853 "claimed": false, 00:35:47.853 "zoned": false, 00:35:47.853 "supported_io_types": { 00:35:47.853 "read": true, 00:35:47.853 "write": true, 00:35:47.853 "unmap": true, 00:35:47.853 "flush": true, 00:35:47.853 "reset": true, 00:35:47.853 "nvme_admin": false, 00:35:47.853 "nvme_io": false, 00:35:47.853 "nvme_io_md": false, 00:35:47.853 "write_zeroes": true, 00:35:47.853 "zcopy": false, 00:35:47.853 "get_zone_info": false, 00:35:47.853 "zone_management": false, 00:35:47.853 "zone_append": false, 00:35:47.853 "compare": false, 00:35:47.853 "compare_and_write": false, 00:35:47.853 "abort": false, 00:35:47.853 "seek_hole": false, 00:35:47.853 "seek_data": false, 00:35:47.853 "copy": false, 00:35:47.853 
"nvme_iov_md": false 00:35:47.853 }, 00:35:47.853 "memory_domains": [ 00:35:47.853 { 00:35:47.853 "dma_device_id": "system", 00:35:47.853 "dma_device_type": 1 00:35:47.853 }, 00:35:47.853 { 00:35:47.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:47.853 "dma_device_type": 2 00:35:47.853 }, 00:35:47.853 { 00:35:47.853 "dma_device_id": "system", 00:35:47.853 "dma_device_type": 1 00:35:47.853 }, 00:35:47.853 { 00:35:47.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:47.853 "dma_device_type": 2 00:35:47.853 }, 00:35:47.853 { 00:35:47.853 "dma_device_id": "system", 00:35:47.853 "dma_device_type": 1 00:35:47.853 }, 00:35:47.853 { 00:35:47.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:47.853 "dma_device_type": 2 00:35:47.853 }, 00:35:47.853 { 00:35:47.853 "dma_device_id": "system", 00:35:47.853 "dma_device_type": 1 00:35:47.853 }, 00:35:47.853 { 00:35:47.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:47.853 "dma_device_type": 2 00:35:47.853 } 00:35:47.853 ], 00:35:47.853 "driver_specific": { 00:35:47.853 "raid": { 00:35:47.853 "uuid": "a6f00f92-5f1f-40e2-a8dc-ce02a55ce366", 00:35:47.853 "strip_size_kb": 64, 00:35:47.853 "state": "online", 00:35:47.853 "raid_level": "raid0", 00:35:47.853 "superblock": true, 00:35:47.853 "num_base_bdevs": 4, 00:35:47.853 "num_base_bdevs_discovered": 4, 00:35:47.853 "num_base_bdevs_operational": 4, 00:35:47.853 "base_bdevs_list": [ 00:35:47.853 { 00:35:47.853 "name": "BaseBdev1", 00:35:47.853 "uuid": "754e21bd-adca-4734-ad8e-96bb6c85f7eb", 00:35:47.853 "is_configured": true, 00:35:47.853 "data_offset": 2048, 00:35:47.853 "data_size": 63488 00:35:47.853 }, 00:35:47.853 { 00:35:47.853 "name": "BaseBdev2", 00:35:47.854 "uuid": "9bc351cf-7af1-416d-a321-4c4eca125010", 00:35:47.854 "is_configured": true, 00:35:47.854 "data_offset": 2048, 00:35:47.854 "data_size": 63488 00:35:47.854 }, 00:35:47.854 { 00:35:47.854 "name": "BaseBdev3", 00:35:47.854 "uuid": "154a859e-1602-43af-8302-abf4a0e11f62", 00:35:47.854 "is_configured": true, 
00:35:47.854 "data_offset": 2048, 00:35:47.854 "data_size": 63488 00:35:47.854 }, 00:35:47.854 { 00:35:47.854 "name": "BaseBdev4", 00:35:47.854 "uuid": "17fa84a4-a359-4555-bcaa-d832caf771b3", 00:35:47.854 "is_configured": true, 00:35:47.854 "data_offset": 2048, 00:35:47.854 "data_size": 63488 00:35:47.854 } 00:35:47.854 ] 00:35:47.854 } 00:35:47.854 } 00:35:47.854 }' 00:35:47.854 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:47.854 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:35:47.854 BaseBdev2 00:35:47.854 BaseBdev3 00:35:47.854 BaseBdev4' 00:35:47.854 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:47.854 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:35:47.854 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:47.854 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:35:47.854 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:47.854 17:32:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.854 17:32:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:47.854 17:32:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.854 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:47.854 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:47.854 17:32:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:47.854 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:35:47.854 17:32:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.854 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:47.854 17:32:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:47.854 17:32:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.854 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:47.854 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:47.854 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:47.854 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:35:47.854 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:47.854 17:32:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.854 17:32:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:47.854 17:32:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.112 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:48.112 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:48.112 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:35:48.112 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:48.112 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:35:48.112 17:32:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.112 17:32:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:48.112 17:32:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.112 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:48.112 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:48.112 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:35:48.112 17:32:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.112 17:32:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:48.112 [2024-11-26 17:32:48.604559] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:48.112 [2024-11-26 17:32:48.604640] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:48.112 [2024-11-26 17:32:48.604730] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:48.112 17:32:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.112 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:35:48.112 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:35:48.112 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:35:48.112 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:35:48.112 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:35:48.112 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:35:48.112 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:48.112 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:35:48.112 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:48.112 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:48.112 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:48.112 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:48.112 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:48.112 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:48.112 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:48.113 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:48.113 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:48.113 17:32:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.113 17:32:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:48.113 17:32:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:35:48.113 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:48.113 "name": "Existed_Raid", 00:35:48.113 "uuid": "a6f00f92-5f1f-40e2-a8dc-ce02a55ce366", 00:35:48.113 "strip_size_kb": 64, 00:35:48.113 "state": "offline", 00:35:48.113 "raid_level": "raid0", 00:35:48.113 "superblock": true, 00:35:48.113 "num_base_bdevs": 4, 00:35:48.113 "num_base_bdevs_discovered": 3, 00:35:48.113 "num_base_bdevs_operational": 3, 00:35:48.113 "base_bdevs_list": [ 00:35:48.113 { 00:35:48.113 "name": null, 00:35:48.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:48.113 "is_configured": false, 00:35:48.113 "data_offset": 0, 00:35:48.113 "data_size": 63488 00:35:48.113 }, 00:35:48.113 { 00:35:48.113 "name": "BaseBdev2", 00:35:48.113 "uuid": "9bc351cf-7af1-416d-a321-4c4eca125010", 00:35:48.113 "is_configured": true, 00:35:48.113 "data_offset": 2048, 00:35:48.113 "data_size": 63488 00:35:48.113 }, 00:35:48.113 { 00:35:48.113 "name": "BaseBdev3", 00:35:48.113 "uuid": "154a859e-1602-43af-8302-abf4a0e11f62", 00:35:48.113 "is_configured": true, 00:35:48.113 "data_offset": 2048, 00:35:48.113 "data_size": 63488 00:35:48.113 }, 00:35:48.113 { 00:35:48.113 "name": "BaseBdev4", 00:35:48.113 "uuid": "17fa84a4-a359-4555-bcaa-d832caf771b3", 00:35:48.113 "is_configured": true, 00:35:48.113 "data_offset": 2048, 00:35:48.113 "data_size": 63488 00:35:48.113 } 00:35:48.113 ] 00:35:48.113 }' 00:35:48.113 17:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:48.113 17:32:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:48.679 17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:35:48.679 17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:35:48.679 17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:48.679 
17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:35:48.679 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.679 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:48.679 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.679 17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:35:48.679 17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:48.679 17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:35:48.679 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.679 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:48.679 [2024-11-26 17:32:49.224849] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:35:48.680 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.680 17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:35:48.680 17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:35:48.680 17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:48.680 17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:35:48.680 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.680 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:48.680 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:35:48.937 17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:35:48.937 17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:48.937 17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:35:48.937 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.937 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:48.937 [2024-11-26 17:32:49.387558] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:35:48.937 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.937 17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:35:48.937 17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:35:48.937 17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:48.937 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.937 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:48.937 17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:35:48.937 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.937 17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:35:48.937 17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:48.937 17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:35:48.938 17:32:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.938 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:48.938 [2024-11-26 17:32:49.557248] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:35:48.938 [2024-11-26 17:32:49.557304] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:35:49.196 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.196 17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:35:49.196 17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:35:49.196 17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:49.196 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.196 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:49.196 17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:35:49.196 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.196 17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:35:49.196 17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:35:49.196 17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:35:49.196 17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:35:49.196 17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:35:49.196 17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:35:49.196 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.196 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:49.196 BaseBdev2 00:35:49.196 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.196 17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:35:49.196 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:35:49.196 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:49.196 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:35:49.196 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:49.196 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:49.196 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:49.196 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.196 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:49.196 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.196 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:35:49.196 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.197 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:49.197 [ 00:35:49.197 { 00:35:49.197 "name": "BaseBdev2", 00:35:49.197 "aliases": [ 00:35:49.197 
"c1ff0855-9da1-4f1c-84ad-67750ce2a573" 00:35:49.197 ], 00:35:49.197 "product_name": "Malloc disk", 00:35:49.197 "block_size": 512, 00:35:49.197 "num_blocks": 65536, 00:35:49.197 "uuid": "c1ff0855-9da1-4f1c-84ad-67750ce2a573", 00:35:49.197 "assigned_rate_limits": { 00:35:49.197 "rw_ios_per_sec": 0, 00:35:49.197 "rw_mbytes_per_sec": 0, 00:35:49.197 "r_mbytes_per_sec": 0, 00:35:49.197 "w_mbytes_per_sec": 0 00:35:49.197 }, 00:35:49.197 "claimed": false, 00:35:49.197 "zoned": false, 00:35:49.197 "supported_io_types": { 00:35:49.197 "read": true, 00:35:49.197 "write": true, 00:35:49.197 "unmap": true, 00:35:49.197 "flush": true, 00:35:49.197 "reset": true, 00:35:49.197 "nvme_admin": false, 00:35:49.197 "nvme_io": false, 00:35:49.197 "nvme_io_md": false, 00:35:49.197 "write_zeroes": true, 00:35:49.197 "zcopy": true, 00:35:49.197 "get_zone_info": false, 00:35:49.197 "zone_management": false, 00:35:49.197 "zone_append": false, 00:35:49.197 "compare": false, 00:35:49.197 "compare_and_write": false, 00:35:49.197 "abort": true, 00:35:49.197 "seek_hole": false, 00:35:49.197 "seek_data": false, 00:35:49.197 "copy": true, 00:35:49.197 "nvme_iov_md": false 00:35:49.197 }, 00:35:49.197 "memory_domains": [ 00:35:49.197 { 00:35:49.197 "dma_device_id": "system", 00:35:49.197 "dma_device_type": 1 00:35:49.197 }, 00:35:49.197 { 00:35:49.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:49.197 "dma_device_type": 2 00:35:49.197 } 00:35:49.197 ], 00:35:49.197 "driver_specific": {} 00:35:49.197 } 00:35:49.197 ] 00:35:49.197 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.197 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:35:49.197 17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:35:49.197 17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:35:49.197 17:32:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:35:49.197 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.197 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:49.197 BaseBdev3 00:35:49.197 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.197 17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:35:49.197 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:35:49.197 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:49.197 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:35:49.197 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:49.197 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:49.197 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:49.197 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.197 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:49.197 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.197 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:35:49.197 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.197 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:49.197 [ 00:35:49.197 { 
00:35:49.197 "name": "BaseBdev3", 00:35:49.197 "aliases": [ 00:35:49.197 "3f33b196-f993-4968-a656-f872fb33054d" 00:35:49.197 ], 00:35:49.197 "product_name": "Malloc disk", 00:35:49.197 "block_size": 512, 00:35:49.197 "num_blocks": 65536, 00:35:49.197 "uuid": "3f33b196-f993-4968-a656-f872fb33054d", 00:35:49.197 "assigned_rate_limits": { 00:35:49.197 "rw_ios_per_sec": 0, 00:35:49.197 "rw_mbytes_per_sec": 0, 00:35:49.197 "r_mbytes_per_sec": 0, 00:35:49.197 "w_mbytes_per_sec": 0 00:35:49.197 }, 00:35:49.197 "claimed": false, 00:35:49.197 "zoned": false, 00:35:49.197 "supported_io_types": { 00:35:49.197 "read": true, 00:35:49.197 "write": true, 00:35:49.197 "unmap": true, 00:35:49.197 "flush": true, 00:35:49.197 "reset": true, 00:35:49.197 "nvme_admin": false, 00:35:49.197 "nvme_io": false, 00:35:49.197 "nvme_io_md": false, 00:35:49.197 "write_zeroes": true, 00:35:49.197 "zcopy": true, 00:35:49.197 "get_zone_info": false, 00:35:49.197 "zone_management": false, 00:35:49.197 "zone_append": false, 00:35:49.197 "compare": false, 00:35:49.197 "compare_and_write": false, 00:35:49.197 "abort": true, 00:35:49.197 "seek_hole": false, 00:35:49.197 "seek_data": false, 00:35:49.197 "copy": true, 00:35:49.197 "nvme_iov_md": false 00:35:49.197 }, 00:35:49.197 "memory_domains": [ 00:35:49.197 { 00:35:49.197 "dma_device_id": "system", 00:35:49.197 "dma_device_type": 1 00:35:49.197 }, 00:35:49.197 { 00:35:49.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:49.197 "dma_device_type": 2 00:35:49.197 } 00:35:49.197 ], 00:35:49.197 "driver_specific": {} 00:35:49.197 } 00:35:49.197 ] 00:35:49.457 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.457 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:35:49.457 17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:35:49.457 17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:35:49.457 17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:35:49.457 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.457 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:49.457 BaseBdev4 00:35:49.457 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.457 17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:35:49.457 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:35:49.457 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:49.457 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:35:49.457 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:49.457 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:49.457 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:49.457 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.457 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:49.457 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.457 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:35:49.457 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.457 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:35:49.457 [ 00:35:49.457 { 00:35:49.457 "name": "BaseBdev4", 00:35:49.457 "aliases": [ 00:35:49.457 "35901999-d653-4131-82dc-e56e98398fdb" 00:35:49.457 ], 00:35:49.457 "product_name": "Malloc disk", 00:35:49.457 "block_size": 512, 00:35:49.457 "num_blocks": 65536, 00:35:49.457 "uuid": "35901999-d653-4131-82dc-e56e98398fdb", 00:35:49.457 "assigned_rate_limits": { 00:35:49.457 "rw_ios_per_sec": 0, 00:35:49.457 "rw_mbytes_per_sec": 0, 00:35:49.457 "r_mbytes_per_sec": 0, 00:35:49.457 "w_mbytes_per_sec": 0 00:35:49.457 }, 00:35:49.457 "claimed": false, 00:35:49.457 "zoned": false, 00:35:49.457 "supported_io_types": { 00:35:49.457 "read": true, 00:35:49.457 "write": true, 00:35:49.457 "unmap": true, 00:35:49.457 "flush": true, 00:35:49.457 "reset": true, 00:35:49.457 "nvme_admin": false, 00:35:49.457 "nvme_io": false, 00:35:49.457 "nvme_io_md": false, 00:35:49.457 "write_zeroes": true, 00:35:49.457 "zcopy": true, 00:35:49.457 "get_zone_info": false, 00:35:49.457 "zone_management": false, 00:35:49.457 "zone_append": false, 00:35:49.457 "compare": false, 00:35:49.457 "compare_and_write": false, 00:35:49.457 "abort": true, 00:35:49.457 "seek_hole": false, 00:35:49.457 "seek_data": false, 00:35:49.457 "copy": true, 00:35:49.457 "nvme_iov_md": false 00:35:49.457 }, 00:35:49.457 "memory_domains": [ 00:35:49.457 { 00:35:49.457 "dma_device_id": "system", 00:35:49.457 "dma_device_type": 1 00:35:49.457 }, 00:35:49.457 { 00:35:49.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:49.457 "dma_device_type": 2 00:35:49.457 } 00:35:49.457 ], 00:35:49.457 "driver_specific": {} 00:35:49.457 } 00:35:49.457 ] 00:35:49.458 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.458 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:35:49.458 17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:35:49.458 17:32:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:35:49.458 17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:35:49.458 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.458 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:49.458 [2024-11-26 17:32:49.991609] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:49.458 [2024-11-26 17:32:49.991713] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:49.458 [2024-11-26 17:32:49.991770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:49.458 [2024-11-26 17:32:49.994001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:49.458 [2024-11-26 17:32:49.994111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:35:49.458 17:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.458 17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:35:49.458 17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:49.458 17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:49.458 17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:49.458 17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:49.458 17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:35:49.458 17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:49.458 17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:49.458 17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:49.458 17:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:49.458 17:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:49.458 17:32:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.458 17:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:49.458 17:32:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:49.458 17:32:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.458 17:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:49.458 "name": "Existed_Raid", 00:35:49.458 "uuid": "c725495f-c8ff-4c63-9629-ab7f5105cacd", 00:35:49.458 "strip_size_kb": 64, 00:35:49.458 "state": "configuring", 00:35:49.458 "raid_level": "raid0", 00:35:49.458 "superblock": true, 00:35:49.458 "num_base_bdevs": 4, 00:35:49.458 "num_base_bdevs_discovered": 3, 00:35:49.458 "num_base_bdevs_operational": 4, 00:35:49.458 "base_bdevs_list": [ 00:35:49.458 { 00:35:49.458 "name": "BaseBdev1", 00:35:49.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:49.458 "is_configured": false, 00:35:49.458 "data_offset": 0, 00:35:49.458 "data_size": 0 00:35:49.458 }, 00:35:49.458 { 00:35:49.458 "name": "BaseBdev2", 00:35:49.458 "uuid": "c1ff0855-9da1-4f1c-84ad-67750ce2a573", 00:35:49.458 "is_configured": true, 00:35:49.458 "data_offset": 2048, 00:35:49.458 "data_size": 63488 
00:35:49.458 }, 00:35:49.458 { 00:35:49.458 "name": "BaseBdev3", 00:35:49.458 "uuid": "3f33b196-f993-4968-a656-f872fb33054d", 00:35:49.458 "is_configured": true, 00:35:49.458 "data_offset": 2048, 00:35:49.458 "data_size": 63488 00:35:49.458 }, 00:35:49.458 { 00:35:49.458 "name": "BaseBdev4", 00:35:49.458 "uuid": "35901999-d653-4131-82dc-e56e98398fdb", 00:35:49.458 "is_configured": true, 00:35:49.458 "data_offset": 2048, 00:35:49.458 "data_size": 63488 00:35:49.458 } 00:35:49.458 ] 00:35:49.458 }' 00:35:49.458 17:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:49.458 17:32:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:50.026 17:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:35:50.027 17:32:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.027 17:32:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:50.027 [2024-11-26 17:32:50.474752] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:35:50.027 17:32:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.027 17:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:35:50.027 17:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:50.027 17:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:50.027 17:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:50.027 17:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:50.027 17:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:35:50.027 17:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:50.027 17:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:50.027 17:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:50.027 17:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:50.027 17:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:50.027 17:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:50.027 17:32:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.027 17:32:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:50.027 17:32:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.027 17:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:50.027 "name": "Existed_Raid", 00:35:50.027 "uuid": "c725495f-c8ff-4c63-9629-ab7f5105cacd", 00:35:50.027 "strip_size_kb": 64, 00:35:50.027 "state": "configuring", 00:35:50.027 "raid_level": "raid0", 00:35:50.027 "superblock": true, 00:35:50.027 "num_base_bdevs": 4, 00:35:50.027 "num_base_bdevs_discovered": 2, 00:35:50.027 "num_base_bdevs_operational": 4, 00:35:50.027 "base_bdevs_list": [ 00:35:50.027 { 00:35:50.027 "name": "BaseBdev1", 00:35:50.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:50.027 "is_configured": false, 00:35:50.027 "data_offset": 0, 00:35:50.027 "data_size": 0 00:35:50.027 }, 00:35:50.027 { 00:35:50.027 "name": null, 00:35:50.027 "uuid": "c1ff0855-9da1-4f1c-84ad-67750ce2a573", 00:35:50.027 "is_configured": false, 00:35:50.027 "data_offset": 0, 00:35:50.027 "data_size": 63488 
00:35:50.027 }, 00:35:50.027 { 00:35:50.027 "name": "BaseBdev3", 00:35:50.027 "uuid": "3f33b196-f993-4968-a656-f872fb33054d", 00:35:50.027 "is_configured": true, 00:35:50.027 "data_offset": 2048, 00:35:50.027 "data_size": 63488 00:35:50.027 }, 00:35:50.027 { 00:35:50.027 "name": "BaseBdev4", 00:35:50.027 "uuid": "35901999-d653-4131-82dc-e56e98398fdb", 00:35:50.027 "is_configured": true, 00:35:50.027 "data_offset": 2048, 00:35:50.027 "data_size": 63488 00:35:50.027 } 00:35:50.027 ] 00:35:50.027 }' 00:35:50.027 17:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:50.027 17:32:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:50.286 17:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:50.286 17:32:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.286 17:32:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:50.286 17:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:35:50.286 17:32:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.286 17:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:35:50.286 17:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:35:50.286 17:32:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.286 17:32:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:50.546 [2024-11-26 17:32:51.003565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:50.546 BaseBdev1 00:35:50.546 17:32:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.546 17:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:35:50.546 17:32:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:35:50.546 17:32:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:50.546 17:32:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:35:50.546 17:32:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:50.546 17:32:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:50.546 17:32:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:50.546 17:32:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.546 17:32:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:50.546 17:32:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.546 17:32:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:35:50.546 17:32:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.546 17:32:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:50.546 [ 00:35:50.546 { 00:35:50.546 "name": "BaseBdev1", 00:35:50.546 "aliases": [ 00:35:50.546 "e6b160c1-dd97-4365-826e-1c55dd93edb9" 00:35:50.546 ], 00:35:50.546 "product_name": "Malloc disk", 00:35:50.546 "block_size": 512, 00:35:50.546 "num_blocks": 65536, 00:35:50.546 "uuid": "e6b160c1-dd97-4365-826e-1c55dd93edb9", 00:35:50.546 "assigned_rate_limits": { 00:35:50.546 "rw_ios_per_sec": 0, 00:35:50.546 "rw_mbytes_per_sec": 0, 
00:35:50.546 "r_mbytes_per_sec": 0, 00:35:50.546 "w_mbytes_per_sec": 0 00:35:50.546 }, 00:35:50.546 "claimed": true, 00:35:50.546 "claim_type": "exclusive_write", 00:35:50.546 "zoned": false, 00:35:50.546 "supported_io_types": { 00:35:50.546 "read": true, 00:35:50.546 "write": true, 00:35:50.546 "unmap": true, 00:35:50.546 "flush": true, 00:35:50.546 "reset": true, 00:35:50.546 "nvme_admin": false, 00:35:50.546 "nvme_io": false, 00:35:50.546 "nvme_io_md": false, 00:35:50.546 "write_zeroes": true, 00:35:50.546 "zcopy": true, 00:35:50.546 "get_zone_info": false, 00:35:50.546 "zone_management": false, 00:35:50.546 "zone_append": false, 00:35:50.546 "compare": false, 00:35:50.546 "compare_and_write": false, 00:35:50.546 "abort": true, 00:35:50.546 "seek_hole": false, 00:35:50.546 "seek_data": false, 00:35:50.546 "copy": true, 00:35:50.546 "nvme_iov_md": false 00:35:50.546 }, 00:35:50.546 "memory_domains": [ 00:35:50.546 { 00:35:50.546 "dma_device_id": "system", 00:35:50.546 "dma_device_type": 1 00:35:50.546 }, 00:35:50.546 { 00:35:50.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:50.547 "dma_device_type": 2 00:35:50.547 } 00:35:50.547 ], 00:35:50.547 "driver_specific": {} 00:35:50.547 } 00:35:50.547 ] 00:35:50.547 17:32:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.547 17:32:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:35:50.547 17:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:35:50.547 17:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:50.547 17:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:50.547 17:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:50.547 17:32:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:50.547 17:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:50.547 17:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:50.547 17:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:50.547 17:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:50.547 17:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:50.547 17:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:50.547 17:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:50.547 17:32:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.547 17:32:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:50.547 17:32:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.547 17:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:50.547 "name": "Existed_Raid", 00:35:50.547 "uuid": "c725495f-c8ff-4c63-9629-ab7f5105cacd", 00:35:50.547 "strip_size_kb": 64, 00:35:50.547 "state": "configuring", 00:35:50.547 "raid_level": "raid0", 00:35:50.547 "superblock": true, 00:35:50.547 "num_base_bdevs": 4, 00:35:50.547 "num_base_bdevs_discovered": 3, 00:35:50.547 "num_base_bdevs_operational": 4, 00:35:50.547 "base_bdevs_list": [ 00:35:50.547 { 00:35:50.547 "name": "BaseBdev1", 00:35:50.547 "uuid": "e6b160c1-dd97-4365-826e-1c55dd93edb9", 00:35:50.547 "is_configured": true, 00:35:50.547 "data_offset": 2048, 00:35:50.547 "data_size": 63488 00:35:50.547 }, 00:35:50.547 { 
00:35:50.547 "name": null, 00:35:50.547 "uuid": "c1ff0855-9da1-4f1c-84ad-67750ce2a573", 00:35:50.547 "is_configured": false, 00:35:50.547 "data_offset": 0, 00:35:50.547 "data_size": 63488 00:35:50.547 }, 00:35:50.547 { 00:35:50.547 "name": "BaseBdev3", 00:35:50.547 "uuid": "3f33b196-f993-4968-a656-f872fb33054d", 00:35:50.547 "is_configured": true, 00:35:50.547 "data_offset": 2048, 00:35:50.547 "data_size": 63488 00:35:50.547 }, 00:35:50.547 { 00:35:50.547 "name": "BaseBdev4", 00:35:50.547 "uuid": "35901999-d653-4131-82dc-e56e98398fdb", 00:35:50.547 "is_configured": true, 00:35:50.547 "data_offset": 2048, 00:35:50.547 "data_size": 63488 00:35:50.547 } 00:35:50.547 ] 00:35:50.547 }' 00:35:50.547 17:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:50.547 17:32:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:51.122 17:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:51.122 17:32:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.122 17:32:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:51.123 17:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:35:51.123 17:32:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.123 17:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:35:51.123 17:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:35:51.123 17:32:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.123 17:32:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:51.123 [2024-11-26 17:32:51.582670] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:35:51.123 17:32:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.123 17:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:35:51.123 17:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:51.123 17:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:51.123 17:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:51.123 17:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:51.123 17:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:51.123 17:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:51.123 17:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:51.123 17:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:51.123 17:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:51.123 17:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:51.123 17:32:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.123 17:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:51.123 17:32:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:51.123 17:32:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.123 17:32:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:51.123 "name": "Existed_Raid", 00:35:51.123 "uuid": "c725495f-c8ff-4c63-9629-ab7f5105cacd", 00:35:51.123 "strip_size_kb": 64, 00:35:51.123 "state": "configuring", 00:35:51.123 "raid_level": "raid0", 00:35:51.123 "superblock": true, 00:35:51.123 "num_base_bdevs": 4, 00:35:51.123 "num_base_bdevs_discovered": 2, 00:35:51.123 "num_base_bdevs_operational": 4, 00:35:51.123 "base_bdevs_list": [ 00:35:51.123 { 00:35:51.123 "name": "BaseBdev1", 00:35:51.123 "uuid": "e6b160c1-dd97-4365-826e-1c55dd93edb9", 00:35:51.123 "is_configured": true, 00:35:51.123 "data_offset": 2048, 00:35:51.123 "data_size": 63488 00:35:51.123 }, 00:35:51.123 { 00:35:51.123 "name": null, 00:35:51.123 "uuid": "c1ff0855-9da1-4f1c-84ad-67750ce2a573", 00:35:51.123 "is_configured": false, 00:35:51.123 "data_offset": 0, 00:35:51.123 "data_size": 63488 00:35:51.123 }, 00:35:51.123 { 00:35:51.123 "name": null, 00:35:51.123 "uuid": "3f33b196-f993-4968-a656-f872fb33054d", 00:35:51.123 "is_configured": false, 00:35:51.123 "data_offset": 0, 00:35:51.123 "data_size": 63488 00:35:51.123 }, 00:35:51.123 { 00:35:51.123 "name": "BaseBdev4", 00:35:51.123 "uuid": "35901999-d653-4131-82dc-e56e98398fdb", 00:35:51.123 "is_configured": true, 00:35:51.123 "data_offset": 2048, 00:35:51.123 "data_size": 63488 00:35:51.123 } 00:35:51.123 ] 00:35:51.123 }' 00:35:51.123 17:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:51.123 17:32:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:51.382 17:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:51.382 17:32:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.382 17:32:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:51.382 17:32:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:35:51.382 17:32:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.382 17:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:35:51.382 17:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:35:51.382 17:32:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.382 17:32:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:51.382 [2024-11-26 17:32:52.073821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:51.641 17:32:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.641 17:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:35:51.641 17:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:51.641 17:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:51.641 17:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:51.641 17:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:51.641 17:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:51.641 17:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:51.641 17:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:51.641 17:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:35:51.641 17:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:51.641 17:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:51.641 17:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:51.641 17:32:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.641 17:32:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:51.641 17:32:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.641 17:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:51.641 "name": "Existed_Raid", 00:35:51.641 "uuid": "c725495f-c8ff-4c63-9629-ab7f5105cacd", 00:35:51.641 "strip_size_kb": 64, 00:35:51.641 "state": "configuring", 00:35:51.641 "raid_level": "raid0", 00:35:51.641 "superblock": true, 00:35:51.641 "num_base_bdevs": 4, 00:35:51.641 "num_base_bdevs_discovered": 3, 00:35:51.641 "num_base_bdevs_operational": 4, 00:35:51.641 "base_bdevs_list": [ 00:35:51.641 { 00:35:51.641 "name": "BaseBdev1", 00:35:51.641 "uuid": "e6b160c1-dd97-4365-826e-1c55dd93edb9", 00:35:51.641 "is_configured": true, 00:35:51.641 "data_offset": 2048, 00:35:51.641 "data_size": 63488 00:35:51.641 }, 00:35:51.641 { 00:35:51.641 "name": null, 00:35:51.641 "uuid": "c1ff0855-9da1-4f1c-84ad-67750ce2a573", 00:35:51.641 "is_configured": false, 00:35:51.641 "data_offset": 0, 00:35:51.641 "data_size": 63488 00:35:51.641 }, 00:35:51.641 { 00:35:51.641 "name": "BaseBdev3", 00:35:51.641 "uuid": "3f33b196-f993-4968-a656-f872fb33054d", 00:35:51.641 "is_configured": true, 00:35:51.641 "data_offset": 2048, 00:35:51.641 "data_size": 63488 00:35:51.641 }, 00:35:51.641 { 00:35:51.641 "name": "BaseBdev4", 00:35:51.641 "uuid": 
"35901999-d653-4131-82dc-e56e98398fdb", 00:35:51.641 "is_configured": true, 00:35:51.641 "data_offset": 2048, 00:35:51.641 "data_size": 63488 00:35:51.641 } 00:35:51.641 ] 00:35:51.641 }' 00:35:51.641 17:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:51.641 17:32:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:51.901 17:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:51.901 17:32:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.901 17:32:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:51.901 17:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:35:51.901 17:32:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.901 17:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:35:51.901 17:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:35:51.901 17:32:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.901 17:32:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:51.901 [2024-11-26 17:32:52.541096] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:52.162 17:32:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.162 17:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:35:52.162 17:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:52.162 17:32:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:52.162 17:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:52.162 17:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:52.162 17:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:52.162 17:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:52.162 17:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:52.162 17:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:52.162 17:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:52.162 17:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:52.162 17:32:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.162 17:32:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:52.162 17:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:52.162 17:32:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.162 17:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:52.162 "name": "Existed_Raid", 00:35:52.162 "uuid": "c725495f-c8ff-4c63-9629-ab7f5105cacd", 00:35:52.162 "strip_size_kb": 64, 00:35:52.162 "state": "configuring", 00:35:52.162 "raid_level": "raid0", 00:35:52.162 "superblock": true, 00:35:52.162 "num_base_bdevs": 4, 00:35:52.162 "num_base_bdevs_discovered": 2, 00:35:52.162 "num_base_bdevs_operational": 4, 00:35:52.162 "base_bdevs_list": [ 00:35:52.162 { 00:35:52.162 "name": null, 00:35:52.162 
"uuid": "e6b160c1-dd97-4365-826e-1c55dd93edb9", 00:35:52.162 "is_configured": false, 00:35:52.162 "data_offset": 0, 00:35:52.162 "data_size": 63488 00:35:52.162 }, 00:35:52.162 { 00:35:52.162 "name": null, 00:35:52.162 "uuid": "c1ff0855-9da1-4f1c-84ad-67750ce2a573", 00:35:52.162 "is_configured": false, 00:35:52.162 "data_offset": 0, 00:35:52.162 "data_size": 63488 00:35:52.162 }, 00:35:52.162 { 00:35:52.162 "name": "BaseBdev3", 00:35:52.162 "uuid": "3f33b196-f993-4968-a656-f872fb33054d", 00:35:52.162 "is_configured": true, 00:35:52.162 "data_offset": 2048, 00:35:52.162 "data_size": 63488 00:35:52.162 }, 00:35:52.162 { 00:35:52.162 "name": "BaseBdev4", 00:35:52.162 "uuid": "35901999-d653-4131-82dc-e56e98398fdb", 00:35:52.162 "is_configured": true, 00:35:52.162 "data_offset": 2048, 00:35:52.162 "data_size": 63488 00:35:52.162 } 00:35:52.162 ] 00:35:52.162 }' 00:35:52.162 17:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:52.162 17:32:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:52.421 17:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:35:52.421 17:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:52.421 17:32:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.421 17:32:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:52.421 17:32:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.681 17:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:35:52.681 17:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:35:52.681 17:32:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.681 17:32:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:52.681 [2024-11-26 17:32:53.126529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:52.681 17:32:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.681 17:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:35:52.681 17:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:52.681 17:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:52.681 17:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:52.681 17:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:52.681 17:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:52.681 17:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:52.681 17:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:52.681 17:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:52.681 17:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:52.681 17:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:52.681 17:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:52.681 17:32:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.681 17:32:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:52.681 17:32:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.681 17:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:52.681 "name": "Existed_Raid", 00:35:52.681 "uuid": "c725495f-c8ff-4c63-9629-ab7f5105cacd", 00:35:52.681 "strip_size_kb": 64, 00:35:52.681 "state": "configuring", 00:35:52.681 "raid_level": "raid0", 00:35:52.681 "superblock": true, 00:35:52.681 "num_base_bdevs": 4, 00:35:52.681 "num_base_bdevs_discovered": 3, 00:35:52.681 "num_base_bdevs_operational": 4, 00:35:52.681 "base_bdevs_list": [ 00:35:52.681 { 00:35:52.681 "name": null, 00:35:52.681 "uuid": "e6b160c1-dd97-4365-826e-1c55dd93edb9", 00:35:52.681 "is_configured": false, 00:35:52.681 "data_offset": 0, 00:35:52.681 "data_size": 63488 00:35:52.681 }, 00:35:52.681 { 00:35:52.681 "name": "BaseBdev2", 00:35:52.681 "uuid": "c1ff0855-9da1-4f1c-84ad-67750ce2a573", 00:35:52.681 "is_configured": true, 00:35:52.681 "data_offset": 2048, 00:35:52.681 "data_size": 63488 00:35:52.681 }, 00:35:52.681 { 00:35:52.681 "name": "BaseBdev3", 00:35:52.681 "uuid": "3f33b196-f993-4968-a656-f872fb33054d", 00:35:52.681 "is_configured": true, 00:35:52.681 "data_offset": 2048, 00:35:52.681 "data_size": 63488 00:35:52.681 }, 00:35:52.681 { 00:35:52.681 "name": "BaseBdev4", 00:35:52.681 "uuid": "35901999-d653-4131-82dc-e56e98398fdb", 00:35:52.681 "is_configured": true, 00:35:52.681 "data_offset": 2048, 00:35:52.681 "data_size": 63488 00:35:52.681 } 00:35:52.681 ] 00:35:52.681 }' 00:35:52.681 17:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:52.681 17:32:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:52.941 17:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:35:52.941 17:32:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:52.941 17:32:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.941 17:32:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:52.942 17:32:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.942 17:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:35:52.942 17:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:35:52.942 17:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:52.942 17:32:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.942 17:32:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:52.942 17:32:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.202 17:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e6b160c1-dd97-4365-826e-1c55dd93edb9 00:35:53.202 17:32:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.202 17:32:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:53.202 [2024-11-26 17:32:53.680865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:35:53.202 [2024-11-26 17:32:53.681228] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:35:53.202 [2024-11-26 17:32:53.681282] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:35:53.202 [2024-11-26 17:32:53.681617] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:35:53.202 [2024-11-26 17:32:53.681824] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:35:53.202 [2024-11-26 17:32:53.681872] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:35:53.202 NewBaseBdev 00:35:53.202 [2024-11-26 17:32:53.682067] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:53.202 17:32:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.202 17:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:35:53.202 17:32:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:35:53.202 17:32:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:53.202 17:32:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:35:53.202 17:32:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:53.202 17:32:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:53.202 17:32:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:53.202 17:32:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.202 17:32:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:53.202 17:32:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.202 17:32:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:35:53.202 17:32:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.202 17:32:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:53.202 [ 00:35:53.202 { 00:35:53.202 "name": "NewBaseBdev", 00:35:53.202 "aliases": [ 00:35:53.202 "e6b160c1-dd97-4365-826e-1c55dd93edb9" 00:35:53.202 ], 00:35:53.202 "product_name": "Malloc disk", 00:35:53.202 "block_size": 512, 00:35:53.202 "num_blocks": 65536, 00:35:53.202 "uuid": "e6b160c1-dd97-4365-826e-1c55dd93edb9", 00:35:53.202 "assigned_rate_limits": { 00:35:53.202 "rw_ios_per_sec": 0, 00:35:53.202 "rw_mbytes_per_sec": 0, 00:35:53.202 "r_mbytes_per_sec": 0, 00:35:53.202 "w_mbytes_per_sec": 0 00:35:53.202 }, 00:35:53.202 "claimed": true, 00:35:53.202 "claim_type": "exclusive_write", 00:35:53.202 "zoned": false, 00:35:53.202 "supported_io_types": { 00:35:53.202 "read": true, 00:35:53.202 "write": true, 00:35:53.202 "unmap": true, 00:35:53.202 "flush": true, 00:35:53.202 "reset": true, 00:35:53.202 "nvme_admin": false, 00:35:53.202 "nvme_io": false, 00:35:53.202 "nvme_io_md": false, 00:35:53.202 "write_zeroes": true, 00:35:53.202 "zcopy": true, 00:35:53.202 "get_zone_info": false, 00:35:53.202 "zone_management": false, 00:35:53.202 "zone_append": false, 00:35:53.202 "compare": false, 00:35:53.202 "compare_and_write": false, 00:35:53.202 "abort": true, 00:35:53.202 "seek_hole": false, 00:35:53.202 "seek_data": false, 00:35:53.202 "copy": true, 00:35:53.202 "nvme_iov_md": false 00:35:53.202 }, 00:35:53.202 "memory_domains": [ 00:35:53.202 { 00:35:53.202 "dma_device_id": "system", 00:35:53.202 "dma_device_type": 1 00:35:53.202 }, 00:35:53.202 { 00:35:53.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:53.202 "dma_device_type": 2 00:35:53.202 } 00:35:53.202 ], 00:35:53.202 "driver_specific": {} 00:35:53.202 } 00:35:53.202 ] 00:35:53.202 17:32:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.202 17:32:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:35:53.202 17:32:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:35:53.202 17:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:53.202 17:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:53.202 17:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:53.202 17:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:53.202 17:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:53.202 17:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:53.202 17:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:53.202 17:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:53.202 17:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:53.202 17:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:53.203 17:32:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.203 17:32:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:53.203 17:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:53.203 17:32:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.203 17:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:53.203 "name": "Existed_Raid", 00:35:53.203 "uuid": "c725495f-c8ff-4c63-9629-ab7f5105cacd", 00:35:53.203 "strip_size_kb": 64, 00:35:53.203 
"state": "online", 00:35:53.203 "raid_level": "raid0", 00:35:53.203 "superblock": true, 00:35:53.203 "num_base_bdevs": 4, 00:35:53.203 "num_base_bdevs_discovered": 4, 00:35:53.203 "num_base_bdevs_operational": 4, 00:35:53.203 "base_bdevs_list": [ 00:35:53.203 { 00:35:53.203 "name": "NewBaseBdev", 00:35:53.203 "uuid": "e6b160c1-dd97-4365-826e-1c55dd93edb9", 00:35:53.203 "is_configured": true, 00:35:53.203 "data_offset": 2048, 00:35:53.203 "data_size": 63488 00:35:53.203 }, 00:35:53.203 { 00:35:53.203 "name": "BaseBdev2", 00:35:53.203 "uuid": "c1ff0855-9da1-4f1c-84ad-67750ce2a573", 00:35:53.203 "is_configured": true, 00:35:53.203 "data_offset": 2048, 00:35:53.203 "data_size": 63488 00:35:53.203 }, 00:35:53.203 { 00:35:53.203 "name": "BaseBdev3", 00:35:53.203 "uuid": "3f33b196-f993-4968-a656-f872fb33054d", 00:35:53.203 "is_configured": true, 00:35:53.203 "data_offset": 2048, 00:35:53.203 "data_size": 63488 00:35:53.203 }, 00:35:53.203 { 00:35:53.203 "name": "BaseBdev4", 00:35:53.203 "uuid": "35901999-d653-4131-82dc-e56e98398fdb", 00:35:53.203 "is_configured": true, 00:35:53.203 "data_offset": 2048, 00:35:53.203 "data_size": 63488 00:35:53.203 } 00:35:53.203 ] 00:35:53.203 }' 00:35:53.203 17:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:53.203 17:32:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:53.771 17:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:35:53.771 17:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:35:53.771 17:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:35:53.771 17:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:35:53.771 17:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:35:53.771 
17:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:35:53.771 17:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:35:53.771 17:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:35:53.771 17:32:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.771 17:32:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:53.771 [2024-11-26 17:32:54.196541] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:53.771 17:32:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.771 17:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:53.771 "name": "Existed_Raid", 00:35:53.771 "aliases": [ 00:35:53.771 "c725495f-c8ff-4c63-9629-ab7f5105cacd" 00:35:53.771 ], 00:35:53.771 "product_name": "Raid Volume", 00:35:53.771 "block_size": 512, 00:35:53.771 "num_blocks": 253952, 00:35:53.771 "uuid": "c725495f-c8ff-4c63-9629-ab7f5105cacd", 00:35:53.771 "assigned_rate_limits": { 00:35:53.771 "rw_ios_per_sec": 0, 00:35:53.771 "rw_mbytes_per_sec": 0, 00:35:53.771 "r_mbytes_per_sec": 0, 00:35:53.771 "w_mbytes_per_sec": 0 00:35:53.771 }, 00:35:53.771 "claimed": false, 00:35:53.771 "zoned": false, 00:35:53.771 "supported_io_types": { 00:35:53.771 "read": true, 00:35:53.771 "write": true, 00:35:53.771 "unmap": true, 00:35:53.771 "flush": true, 00:35:53.771 "reset": true, 00:35:53.771 "nvme_admin": false, 00:35:53.771 "nvme_io": false, 00:35:53.771 "nvme_io_md": false, 00:35:53.771 "write_zeroes": true, 00:35:53.771 "zcopy": false, 00:35:53.771 "get_zone_info": false, 00:35:53.771 "zone_management": false, 00:35:53.771 "zone_append": false, 00:35:53.771 "compare": false, 00:35:53.771 "compare_and_write": false, 00:35:53.771 "abort": 
false, 00:35:53.771 "seek_hole": false, 00:35:53.771 "seek_data": false, 00:35:53.771 "copy": false, 00:35:53.771 "nvme_iov_md": false 00:35:53.771 }, 00:35:53.771 "memory_domains": [ 00:35:53.771 { 00:35:53.771 "dma_device_id": "system", 00:35:53.771 "dma_device_type": 1 00:35:53.771 }, 00:35:53.771 { 00:35:53.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:53.771 "dma_device_type": 2 00:35:53.771 }, 00:35:53.771 { 00:35:53.771 "dma_device_id": "system", 00:35:53.771 "dma_device_type": 1 00:35:53.771 }, 00:35:53.771 { 00:35:53.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:53.771 "dma_device_type": 2 00:35:53.771 }, 00:35:53.771 { 00:35:53.771 "dma_device_id": "system", 00:35:53.771 "dma_device_type": 1 00:35:53.771 }, 00:35:53.771 { 00:35:53.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:53.771 "dma_device_type": 2 00:35:53.771 }, 00:35:53.771 { 00:35:53.771 "dma_device_id": "system", 00:35:53.771 "dma_device_type": 1 00:35:53.771 }, 00:35:53.771 { 00:35:53.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:53.771 "dma_device_type": 2 00:35:53.772 } 00:35:53.772 ], 00:35:53.772 "driver_specific": { 00:35:53.772 "raid": { 00:35:53.772 "uuid": "c725495f-c8ff-4c63-9629-ab7f5105cacd", 00:35:53.772 "strip_size_kb": 64, 00:35:53.772 "state": "online", 00:35:53.772 "raid_level": "raid0", 00:35:53.772 "superblock": true, 00:35:53.772 "num_base_bdevs": 4, 00:35:53.772 "num_base_bdevs_discovered": 4, 00:35:53.772 "num_base_bdevs_operational": 4, 00:35:53.772 "base_bdevs_list": [ 00:35:53.772 { 00:35:53.772 "name": "NewBaseBdev", 00:35:53.772 "uuid": "e6b160c1-dd97-4365-826e-1c55dd93edb9", 00:35:53.772 "is_configured": true, 00:35:53.772 "data_offset": 2048, 00:35:53.772 "data_size": 63488 00:35:53.772 }, 00:35:53.772 { 00:35:53.772 "name": "BaseBdev2", 00:35:53.772 "uuid": "c1ff0855-9da1-4f1c-84ad-67750ce2a573", 00:35:53.772 "is_configured": true, 00:35:53.772 "data_offset": 2048, 00:35:53.772 "data_size": 63488 00:35:53.772 }, 00:35:53.772 { 00:35:53.772 
"name": "BaseBdev3", 00:35:53.772 "uuid": "3f33b196-f993-4968-a656-f872fb33054d", 00:35:53.772 "is_configured": true, 00:35:53.772 "data_offset": 2048, 00:35:53.772 "data_size": 63488 00:35:53.772 }, 00:35:53.772 { 00:35:53.772 "name": "BaseBdev4", 00:35:53.772 "uuid": "35901999-d653-4131-82dc-e56e98398fdb", 00:35:53.772 "is_configured": true, 00:35:53.772 "data_offset": 2048, 00:35:53.772 "data_size": 63488 00:35:53.772 } 00:35:53.772 ] 00:35:53.772 } 00:35:53.772 } 00:35:53.772 }' 00:35:53.772 17:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:53.772 17:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:35:53.772 BaseBdev2 00:35:53.772 BaseBdev3 00:35:53.772 BaseBdev4' 00:35:53.772 17:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:53.772 17:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:35:53.772 17:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:53.772 17:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:35:53.772 17:32:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.772 17:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:53.772 17:32:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:53.772 17:32:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.772 17:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:53.772 17:32:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:53.772 17:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:53.772 17:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:53.772 17:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:35:53.772 17:32:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.772 17:32:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:53.772 17:32:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.772 17:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:53.772 17:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:53.772 17:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:53.772 17:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:35:53.772 17:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:53.772 17:32:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.772 17:32:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:53.772 17:32:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.772 17:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:53.772 17:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:35:53.772 17:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:53.772 17:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:35:54.032 17:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:54.032 17:32:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.032 17:32:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:54.032 17:32:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.032 17:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:54.032 17:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:54.032 17:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:35:54.032 17:32:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.032 17:32:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:54.032 [2024-11-26 17:32:54.515582] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:54.032 [2024-11-26 17:32:54.515618] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:54.032 [2024-11-26 17:32:54.515712] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:54.032 [2024-11-26 17:32:54.515787] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:54.032 [2024-11-26 17:32:54.515798] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:35:54.032 17:32:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.032 17:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70313 00:35:54.032 17:32:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70313 ']' 00:35:54.032 17:32:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70313 00:35:54.032 17:32:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:35:54.032 17:32:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:54.032 17:32:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70313 00:35:54.032 17:32:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:54.032 17:32:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:54.032 17:32:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70313' 00:35:54.032 killing process with pid 70313 00:35:54.032 17:32:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70313 00:35:54.032 [2024-11-26 17:32:54.561171] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:54.032 17:32:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70313 00:35:54.601 [2024-11-26 17:32:54.993046] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:55.981 17:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:35:55.981 00:35:55.981 real 0m12.072s 00:35:55.981 user 0m19.012s 00:35:55.981 sys 0m2.108s 00:35:55.981 17:32:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:55.981 17:32:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:55.981 ************************************ 00:35:55.981 END TEST raid_state_function_test_sb 00:35:55.981 ************************************ 00:35:55.981 17:32:56 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:35:55.981 17:32:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:55.981 17:32:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:55.981 17:32:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:55.981 ************************************ 00:35:55.981 START TEST raid_superblock_test 00:35:55.981 ************************************ 00:35:55.981 17:32:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:35:55.981 17:32:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:35:55.981 17:32:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:35:55.981 17:32:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:35:55.981 17:32:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:35:55.981 17:32:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:35:55.981 17:32:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:35:55.981 17:32:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:35:55.981 17:32:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:35:55.981 17:32:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:35:55.981 17:32:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:35:55.981 17:32:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:35:55.981 17:32:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:35:55.981 17:32:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:35:55.981 17:32:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:35:55.981 17:32:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:35:55.981 17:32:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:35:55.981 17:32:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70991 00:35:55.981 17:32:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:35:55.981 17:32:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70991 00:35:55.981 17:32:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70991 ']' 00:35:55.981 17:32:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:55.981 17:32:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:55.981 17:32:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:55.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:55.981 17:32:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:55.981 17:32:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:55.981 [2024-11-26 17:32:56.396101] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:35:55.981 [2024-11-26 17:32:56.396232] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70991 ] 00:35:55.981 [2024-11-26 17:32:56.568737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:56.240 [2024-11-26 17:32:56.690865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:56.240 [2024-11-26 17:32:56.921620] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:56.240 [2024-11-26 17:32:56.921698] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:56.808 17:32:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:56.808 17:32:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:35:56.808 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:35:56.808 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:35:56.808 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:35:56.808 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:35:56.808 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:35:56.808 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:56.808 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:35:56.808 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:56.808 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:35:56.808 
17:32:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.808 17:32:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:56.808 malloc1 00:35:56.808 17:32:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.808 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:35:56.808 17:32:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.808 17:32:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:56.808 [2024-11-26 17:32:57.349079] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:35:56.808 [2024-11-26 17:32:57.349218] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:56.808 [2024-11-26 17:32:57.349284] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:35:56.808 [2024-11-26 17:32:57.349327] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:56.808 [2024-11-26 17:32:57.351907] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:56.808 [2024-11-26 17:32:57.352004] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:35:56.808 pt1 00:35:56.808 17:32:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.808 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:35:56.809 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:35:56.809 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:35:56.809 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:35:56.809 17:32:57 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:35:56.809 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:56.809 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:35:56.809 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:56.809 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:35:56.809 17:32:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.809 17:32:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:56.809 malloc2 00:35:56.809 17:32:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.809 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:56.809 17:32:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.809 17:32:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:56.809 [2024-11-26 17:32:57.406884] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:56.809 [2024-11-26 17:32:57.407006] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:56.809 [2024-11-26 17:32:57.407058] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:35:56.809 [2024-11-26 17:32:57.407096] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:56.809 [2024-11-26 17:32:57.409557] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:56.809 [2024-11-26 17:32:57.409638] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:56.809 
pt2 00:35:56.809 17:32:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.809 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:35:56.809 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:35:56.809 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:35:56.809 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:35:56.809 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:35:56.809 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:56.809 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:35:56.809 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:56.809 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:35:56.809 17:32:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.809 17:32:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:56.809 malloc3 00:35:56.809 17:32:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.809 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:35:56.809 17:32:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.809 17:32:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:56.809 [2024-11-26 17:32:57.482590] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:35:56.809 [2024-11-26 17:32:57.482653] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:56.809 [2024-11-26 17:32:57.482678] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:35:56.809 [2024-11-26 17:32:57.482690] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:56.809 [2024-11-26 17:32:57.485180] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:56.809 [2024-11-26 17:32:57.485230] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:35:56.809 pt3 00:35:56.809 17:32:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.809 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:35:56.809 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:35:56.809 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:35:56.809 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:35:56.809 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:35:56.809 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:56.809 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:35:56.809 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:56.809 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:35:56.809 17:32:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.809 17:32:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:57.068 malloc4 00:35:57.068 17:32:57 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.069 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:35:57.069 17:32:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.069 17:32:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:57.069 [2024-11-26 17:32:57.545234] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:35:57.069 [2024-11-26 17:32:57.545377] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:57.069 [2024-11-26 17:32:57.545445] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:35:57.069 [2024-11-26 17:32:57.545483] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:57.069 [2024-11-26 17:32:57.548009] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:57.069 [2024-11-26 17:32:57.548105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:35:57.069 pt4 00:35:57.069 17:32:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.069 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:35:57.069 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:35:57.069 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:35:57.069 17:32:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.069 17:32:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:57.069 [2024-11-26 17:32:57.557243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:35:57.069 [2024-11-26 
17:32:57.559392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:57.069 [2024-11-26 17:32:57.559558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:35:57.069 [2024-11-26 17:32:57.559659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:35:57.069 [2024-11-26 17:32:57.559928] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:35:57.069 [2024-11-26 17:32:57.559989] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:35:57.069 [2024-11-26 17:32:57.560330] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:35:57.069 [2024-11-26 17:32:57.560596] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:35:57.069 [2024-11-26 17:32:57.560650] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:35:57.069 [2024-11-26 17:32:57.560914] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:57.069 17:32:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.069 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:35:57.069 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:57.069 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:57.069 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:57.069 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:57.069 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:57.069 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:35:57.069 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:57.069 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:57.069 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:57.069 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:57.069 17:32:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.069 17:32:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:57.069 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:57.069 17:32:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.069 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:57.069 "name": "raid_bdev1", 00:35:57.069 "uuid": "9c209f7e-930d-4140-813c-b3fd3b11e468", 00:35:57.069 "strip_size_kb": 64, 00:35:57.069 "state": "online", 00:35:57.069 "raid_level": "raid0", 00:35:57.069 "superblock": true, 00:35:57.069 "num_base_bdevs": 4, 00:35:57.069 "num_base_bdevs_discovered": 4, 00:35:57.069 "num_base_bdevs_operational": 4, 00:35:57.069 "base_bdevs_list": [ 00:35:57.069 { 00:35:57.069 "name": "pt1", 00:35:57.069 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:57.069 "is_configured": true, 00:35:57.069 "data_offset": 2048, 00:35:57.069 "data_size": 63488 00:35:57.069 }, 00:35:57.069 { 00:35:57.069 "name": "pt2", 00:35:57.069 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:57.069 "is_configured": true, 00:35:57.069 "data_offset": 2048, 00:35:57.069 "data_size": 63488 00:35:57.069 }, 00:35:57.069 { 00:35:57.069 "name": "pt3", 00:35:57.069 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:57.069 "is_configured": true, 00:35:57.069 "data_offset": 2048, 00:35:57.069 
"data_size": 63488 00:35:57.069 }, 00:35:57.069 { 00:35:57.069 "name": "pt4", 00:35:57.069 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:57.069 "is_configured": true, 00:35:57.069 "data_offset": 2048, 00:35:57.069 "data_size": 63488 00:35:57.069 } 00:35:57.069 ] 00:35:57.069 }' 00:35:57.069 17:32:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:57.069 17:32:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:57.638 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:35:57.638 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:35:57.638 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:35:57.638 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:35:57.638 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:35:57.638 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:35:57.638 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:35:57.638 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:35:57.638 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.638 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:57.638 [2024-11-26 17:32:58.032864] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:57.638 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.639 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:57.639 "name": "raid_bdev1", 00:35:57.639 "aliases": [ 00:35:57.639 "9c209f7e-930d-4140-813c-b3fd3b11e468" 
00:35:57.639 ], 00:35:57.639 "product_name": "Raid Volume", 00:35:57.639 "block_size": 512, 00:35:57.639 "num_blocks": 253952, 00:35:57.639 "uuid": "9c209f7e-930d-4140-813c-b3fd3b11e468", 00:35:57.639 "assigned_rate_limits": { 00:35:57.639 "rw_ios_per_sec": 0, 00:35:57.639 "rw_mbytes_per_sec": 0, 00:35:57.639 "r_mbytes_per_sec": 0, 00:35:57.639 "w_mbytes_per_sec": 0 00:35:57.639 }, 00:35:57.639 "claimed": false, 00:35:57.639 "zoned": false, 00:35:57.639 "supported_io_types": { 00:35:57.639 "read": true, 00:35:57.639 "write": true, 00:35:57.639 "unmap": true, 00:35:57.639 "flush": true, 00:35:57.639 "reset": true, 00:35:57.639 "nvme_admin": false, 00:35:57.639 "nvme_io": false, 00:35:57.639 "nvme_io_md": false, 00:35:57.639 "write_zeroes": true, 00:35:57.639 "zcopy": false, 00:35:57.639 "get_zone_info": false, 00:35:57.639 "zone_management": false, 00:35:57.639 "zone_append": false, 00:35:57.639 "compare": false, 00:35:57.639 "compare_and_write": false, 00:35:57.639 "abort": false, 00:35:57.639 "seek_hole": false, 00:35:57.639 "seek_data": false, 00:35:57.639 "copy": false, 00:35:57.639 "nvme_iov_md": false 00:35:57.639 }, 00:35:57.639 "memory_domains": [ 00:35:57.639 { 00:35:57.639 "dma_device_id": "system", 00:35:57.639 "dma_device_type": 1 00:35:57.639 }, 00:35:57.639 { 00:35:57.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:57.639 "dma_device_type": 2 00:35:57.639 }, 00:35:57.639 { 00:35:57.639 "dma_device_id": "system", 00:35:57.639 "dma_device_type": 1 00:35:57.639 }, 00:35:57.639 { 00:35:57.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:57.639 "dma_device_type": 2 00:35:57.639 }, 00:35:57.639 { 00:35:57.639 "dma_device_id": "system", 00:35:57.639 "dma_device_type": 1 00:35:57.639 }, 00:35:57.639 { 00:35:57.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:57.639 "dma_device_type": 2 00:35:57.639 }, 00:35:57.639 { 00:35:57.639 "dma_device_id": "system", 00:35:57.639 "dma_device_type": 1 00:35:57.639 }, 00:35:57.639 { 00:35:57.639 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:35:57.639 "dma_device_type": 2 00:35:57.639 } 00:35:57.639 ], 00:35:57.639 "driver_specific": { 00:35:57.639 "raid": { 00:35:57.639 "uuid": "9c209f7e-930d-4140-813c-b3fd3b11e468", 00:35:57.639 "strip_size_kb": 64, 00:35:57.639 "state": "online", 00:35:57.639 "raid_level": "raid0", 00:35:57.639 "superblock": true, 00:35:57.639 "num_base_bdevs": 4, 00:35:57.639 "num_base_bdevs_discovered": 4, 00:35:57.639 "num_base_bdevs_operational": 4, 00:35:57.639 "base_bdevs_list": [ 00:35:57.639 { 00:35:57.639 "name": "pt1", 00:35:57.639 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:57.639 "is_configured": true, 00:35:57.639 "data_offset": 2048, 00:35:57.639 "data_size": 63488 00:35:57.639 }, 00:35:57.639 { 00:35:57.639 "name": "pt2", 00:35:57.639 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:57.639 "is_configured": true, 00:35:57.639 "data_offset": 2048, 00:35:57.639 "data_size": 63488 00:35:57.639 }, 00:35:57.639 { 00:35:57.639 "name": "pt3", 00:35:57.639 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:57.639 "is_configured": true, 00:35:57.639 "data_offset": 2048, 00:35:57.639 "data_size": 63488 00:35:57.639 }, 00:35:57.639 { 00:35:57.639 "name": "pt4", 00:35:57.639 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:57.639 "is_configured": true, 00:35:57.639 "data_offset": 2048, 00:35:57.639 "data_size": 63488 00:35:57.639 } 00:35:57.639 ] 00:35:57.639 } 00:35:57.639 } 00:35:57.639 }' 00:35:57.639 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:57.639 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:35:57.639 pt2 00:35:57.639 pt3 00:35:57.639 pt4' 00:35:57.639 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:57.639 17:32:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:35:57.639 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:57.639 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:35:57.639 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.639 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:57.639 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:57.639 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.639 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:57.639 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:57.639 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:57.639 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:57.639 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:35:57.639 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.639 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:57.639 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.639 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:57.639 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:57.639 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:57.639 17:32:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:35:57.639 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.639 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:57.639 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:57.639 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.639 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:57.639 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:57.639 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:57.639 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:35:57.639 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.639 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:57.639 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:57.898 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.898 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:57.898 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:57.898 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:35:57.898 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:35:57.898 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:35:57.898 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:57.898 [2024-11-26 17:32:58.400504] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:57.898 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.898 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9c209f7e-930d-4140-813c-b3fd3b11e468 00:35:57.898 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 9c209f7e-930d-4140-813c-b3fd3b11e468 ']' 00:35:57.898 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:35:57.898 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.898 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:57.898 [2024-11-26 17:32:58.444111] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:57.898 [2024-11-26 17:32:58.444225] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:57.898 [2024-11-26 17:32:58.444361] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:57.898 [2024-11-26 17:32:58.444477] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:57.898 [2024-11-26 17:32:58.444556] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:35:57.898 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.898 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:57.898 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:35:57.898 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:35:57.898 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:57.898 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.898 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:35:57.898 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:35:57.898 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:35:57.899 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:35:57.899 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.899 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:57.899 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.899 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:35:57.899 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:35:57.899 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.899 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:57.899 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.899 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:35:57.899 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:35:57.899 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.899 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:57.899 17:32:58 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.899 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:35:57.899 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:35:57.899 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.899 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:57.899 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.899 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:35:57.899 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:35:57.899 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.899 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:57.899 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.158 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:35:58.159 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:35:58.159 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:35:58.159 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:35:58.159 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:58.159 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:58.159 17:32:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:58.159 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:58.159 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:35:58.159 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.159 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:58.159 [2024-11-26 17:32:58.619828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:35:58.159 [2024-11-26 17:32:58.622080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:35:58.159 [2024-11-26 17:32:58.622190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:35:58.159 [2024-11-26 17:32:58.622263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:35:58.159 [2024-11-26 17:32:58.622370] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:35:58.159 [2024-11-26 17:32:58.622478] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:35:58.159 [2024-11-26 17:32:58.622558] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:35:58.159 [2024-11-26 17:32:58.622637] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:35:58.159 [2024-11-26 17:32:58.622692] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:58.159 [2024-11-26 17:32:58.622730] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:35:58.159 request: 00:35:58.159 { 00:35:58.159 "name": "raid_bdev1", 00:35:58.159 "raid_level": "raid0", 00:35:58.159 "base_bdevs": [ 00:35:58.159 "malloc1", 00:35:58.159 "malloc2", 00:35:58.159 "malloc3", 00:35:58.159 "malloc4" 00:35:58.159 ], 00:35:58.159 "strip_size_kb": 64, 00:35:58.159 "superblock": false, 00:35:58.159 "method": "bdev_raid_create", 00:35:58.159 "req_id": 1 00:35:58.159 } 00:35:58.159 Got JSON-RPC error response 00:35:58.159 response: 00:35:58.159 { 00:35:58.159 "code": -17, 00:35:58.159 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:35:58.159 } 00:35:58.159 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:58.159 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:35:58.159 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:58.159 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:58.159 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:58.159 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:58.159 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.159 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:58.159 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:35:58.159 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.159 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:35:58.159 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:35:58.159 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
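The exchange logged above is the test's negative path: re-creating `raid_bdev1` over base bdevs that already carry another raid bdev's superblock is expected to fail with JSON-RPC error `-17`. As an editor's illustration (not part of the test suite, and not calling a live SPDK target), the sketch below rebuilds the recorded request and asserts on the error shape shown in the log:

```python
# Editor's sketch: reconstructs the JSON-RPC exchange recorded in the log above.
# The payloads mirror the logged request/response verbatim; no SPDK target is contacted.
import json

request = {
    "name": "raid_bdev1",
    "raid_level": "raid0",
    "base_bdevs": ["malloc1", "malloc2", "malloc3", "malloc4"],
    "strip_size_kb": 64,
    "superblock": False,
    "method": "bdev_raid_create",
    "req_id": 1,
}

# Error response logged when the base bdevs already hold a superblock
# belonging to a different raid bdev:
response = {"code": -17, "message": "Failed to create RAID bdev raid_bdev1: File exists"}

# The NOT wrapper in the script treats this non-zero outcome (es=1) as success.
assert response["code"] == -17
assert "File exists" in response["message"]
print(json.dumps(request, indent=2))
```

The subsequent `es=1` / `(( !es == 0 ))` lines in the trace are the harness confirming exactly this: the RPC failed, which is the outcome the test demands.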
-u 00000000-0000-0000-0000-000000000001 00:35:58.159 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.159 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:58.159 [2024-11-26 17:32:58.683684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:35:58.159 [2024-11-26 17:32:58.683810] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:58.159 [2024-11-26 17:32:58.683866] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:35:58.159 [2024-11-26 17:32:58.683903] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:58.159 [2024-11-26 17:32:58.686420] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:58.159 [2024-11-26 17:32:58.686509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:35:58.159 [2024-11-26 17:32:58.686659] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:35:58.159 [2024-11-26 17:32:58.686761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:35:58.159 pt1 00:35:58.159 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.159 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:35:58.159 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:58.159 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:58.159 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:58.159 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:58.159 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:35:58.159 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:58.159 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:58.159 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:58.159 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:58.159 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:58.159 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:58.159 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.159 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:58.159 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.159 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:58.159 "name": "raid_bdev1", 00:35:58.159 "uuid": "9c209f7e-930d-4140-813c-b3fd3b11e468", 00:35:58.159 "strip_size_kb": 64, 00:35:58.159 "state": "configuring", 00:35:58.159 "raid_level": "raid0", 00:35:58.159 "superblock": true, 00:35:58.159 "num_base_bdevs": 4, 00:35:58.159 "num_base_bdevs_discovered": 1, 00:35:58.159 "num_base_bdevs_operational": 4, 00:35:58.159 "base_bdevs_list": [ 00:35:58.159 { 00:35:58.159 "name": "pt1", 00:35:58.159 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:58.159 "is_configured": true, 00:35:58.159 "data_offset": 2048, 00:35:58.159 "data_size": 63488 00:35:58.159 }, 00:35:58.159 { 00:35:58.159 "name": null, 00:35:58.159 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:58.159 "is_configured": false, 00:35:58.159 "data_offset": 2048, 00:35:58.159 "data_size": 63488 00:35:58.159 }, 00:35:58.159 { 00:35:58.159 "name": null, 00:35:58.159 
"uuid": "00000000-0000-0000-0000-000000000003", 00:35:58.159 "is_configured": false, 00:35:58.159 "data_offset": 2048, 00:35:58.159 "data_size": 63488 00:35:58.159 }, 00:35:58.159 { 00:35:58.160 "name": null, 00:35:58.160 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:58.160 "is_configured": false, 00:35:58.160 "data_offset": 2048, 00:35:58.160 "data_size": 63488 00:35:58.160 } 00:35:58.160 ] 00:35:58.160 }' 00:35:58.160 17:32:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:58.160 17:32:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:58.418 17:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:35:58.418 17:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:58.418 17:32:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.418 17:32:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:58.679 [2024-11-26 17:32:59.114995] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:58.680 [2024-11-26 17:32:59.115136] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:58.680 [2024-11-26 17:32:59.115190] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:35:58.680 [2024-11-26 17:32:59.115231] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:58.680 [2024-11-26 17:32:59.115776] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:58.680 [2024-11-26 17:32:59.115844] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:58.680 [2024-11-26 17:32:59.115977] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:35:58.680 [2024-11-26 17:32:59.116040] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:58.680 pt2 00:35:58.680 17:32:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.680 17:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:35:58.680 17:32:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.680 17:32:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:58.680 [2024-11-26 17:32:59.122972] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:35:58.680 17:32:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.680 17:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:35:58.680 17:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:58.680 17:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:58.680 17:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:58.680 17:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:58.680 17:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:58.680 17:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:58.681 17:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:58.681 17:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:58.681 17:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:58.681 17:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:58.681 17:32:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:58.681 17:32:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.681 17:32:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:58.681 17:32:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.681 17:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:58.681 "name": "raid_bdev1", 00:35:58.681 "uuid": "9c209f7e-930d-4140-813c-b3fd3b11e468", 00:35:58.681 "strip_size_kb": 64, 00:35:58.681 "state": "configuring", 00:35:58.681 "raid_level": "raid0", 00:35:58.681 "superblock": true, 00:35:58.681 "num_base_bdevs": 4, 00:35:58.681 "num_base_bdevs_discovered": 1, 00:35:58.681 "num_base_bdevs_operational": 4, 00:35:58.681 "base_bdevs_list": [ 00:35:58.681 { 00:35:58.681 "name": "pt1", 00:35:58.681 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:58.681 "is_configured": true, 00:35:58.681 "data_offset": 2048, 00:35:58.681 "data_size": 63488 00:35:58.681 }, 00:35:58.681 { 00:35:58.681 "name": null, 00:35:58.681 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:58.681 "is_configured": false, 00:35:58.681 "data_offset": 0, 00:35:58.681 "data_size": 63488 00:35:58.681 }, 00:35:58.681 { 00:35:58.681 "name": null, 00:35:58.681 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:58.681 "is_configured": false, 00:35:58.681 "data_offset": 2048, 00:35:58.681 "data_size": 63488 00:35:58.681 }, 00:35:58.681 { 00:35:58.681 "name": null, 00:35:58.681 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:58.681 "is_configured": false, 00:35:58.681 "data_offset": 2048, 00:35:58.681 "data_size": 63488 00:35:58.681 } 00:35:58.681 ] 00:35:58.681 }' 00:35:58.681 17:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:58.681 17:32:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:35:58.939 17:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:35:58.939 17:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:35:58.939 17:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:58.939 17:32:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.939 17:32:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:58.939 [2024-11-26 17:32:59.594200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:58.939 [2024-11-26 17:32:59.594287] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:58.939 [2024-11-26 17:32:59.594312] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:35:58.939 [2024-11-26 17:32:59.594324] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:58.939 [2024-11-26 17:32:59.594832] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:58.939 [2024-11-26 17:32:59.594853] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:58.939 [2024-11-26 17:32:59.594955] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:35:58.939 [2024-11-26 17:32:59.594981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:58.939 pt2 00:35:58.939 17:32:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.939 17:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:35:58.939 17:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:35:58.939 17:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:35:58.939 17:32:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.939 17:32:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:58.939 [2024-11-26 17:32:59.606173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:35:58.939 [2024-11-26 17:32:59.606256] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:58.939 [2024-11-26 17:32:59.606281] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:35:58.939 [2024-11-26 17:32:59.606292] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:58.939 [2024-11-26 17:32:59.606815] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:58.939 [2024-11-26 17:32:59.606842] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:35:58.939 [2024-11-26 17:32:59.606938] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:35:58.939 [2024-11-26 17:32:59.606970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:35:58.939 pt3 00:35:58.939 17:32:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.939 17:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:35:58.939 17:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:35:58.939 17:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:35:58.939 17:32:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.939 17:32:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:58.939 [2024-11-26 17:32:59.618113] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:35:58.939 [2024-11-26 17:32:59.618178] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:58.939 [2024-11-26 17:32:59.618202] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:35:58.939 [2024-11-26 17:32:59.618213] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:58.939 [2024-11-26 17:32:59.618742] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:58.939 [2024-11-26 17:32:59.618770] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:35:58.939 [2024-11-26 17:32:59.618864] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:35:58.939 [2024-11-26 17:32:59.618893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:35:58.939 [2024-11-26 17:32:59.619058] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:35:58.939 [2024-11-26 17:32:59.619069] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:35:58.939 [2024-11-26 17:32:59.619349] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:35:58.939 [2024-11-26 17:32:59.619553] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:35:58.939 [2024-11-26 17:32:59.619571] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:35:58.939 [2024-11-26 17:32:59.619725] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:58.939 pt4 00:35:58.939 17:32:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.939 17:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:35:58.939 17:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:35:58.939 17:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:35:58.939 17:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:58.939 17:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:58.939 17:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:35:58.939 17:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:58.939 17:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:58.939 17:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:58.939 17:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:58.939 17:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:58.939 17:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:58.939 17:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:58.939 17:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:58.939 17:32:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.940 17:32:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:59.198 17:32:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.198 17:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:59.198 "name": "raid_bdev1", 00:35:59.198 "uuid": "9c209f7e-930d-4140-813c-b3fd3b11e468", 00:35:59.198 "strip_size_kb": 64, 00:35:59.198 "state": "online", 00:35:59.198 "raid_level": "raid0", 00:35:59.198 
"superblock": true, 00:35:59.198 "num_base_bdevs": 4, 00:35:59.198 "num_base_bdevs_discovered": 4, 00:35:59.198 "num_base_bdevs_operational": 4, 00:35:59.198 "base_bdevs_list": [ 00:35:59.198 { 00:35:59.198 "name": "pt1", 00:35:59.198 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:59.198 "is_configured": true, 00:35:59.198 "data_offset": 2048, 00:35:59.198 "data_size": 63488 00:35:59.198 }, 00:35:59.198 { 00:35:59.198 "name": "pt2", 00:35:59.198 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:59.198 "is_configured": true, 00:35:59.198 "data_offset": 2048, 00:35:59.198 "data_size": 63488 00:35:59.198 }, 00:35:59.198 { 00:35:59.198 "name": "pt3", 00:35:59.198 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:59.198 "is_configured": true, 00:35:59.198 "data_offset": 2048, 00:35:59.198 "data_size": 63488 00:35:59.198 }, 00:35:59.198 { 00:35:59.198 "name": "pt4", 00:35:59.198 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:59.198 "is_configured": true, 00:35:59.198 "data_offset": 2048, 00:35:59.198 "data_size": 63488 00:35:59.198 } 00:35:59.198 ] 00:35:59.198 }' 00:35:59.198 17:32:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:59.198 17:32:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:59.466 17:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:35:59.466 17:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:35:59.466 17:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:35:59.466 17:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:35:59.466 17:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:35:59.466 17:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:35:59.466 17:33:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:35:59.466 17:33:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.466 17:33:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:59.466 17:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:35:59.466 [2024-11-26 17:33:00.081778] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:59.466 17:33:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.466 17:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:59.466 "name": "raid_bdev1", 00:35:59.466 "aliases": [ 00:35:59.466 "9c209f7e-930d-4140-813c-b3fd3b11e468" 00:35:59.466 ], 00:35:59.466 "product_name": "Raid Volume", 00:35:59.466 "block_size": 512, 00:35:59.466 "num_blocks": 253952, 00:35:59.466 "uuid": "9c209f7e-930d-4140-813c-b3fd3b11e468", 00:35:59.466 "assigned_rate_limits": { 00:35:59.466 "rw_ios_per_sec": 0, 00:35:59.466 "rw_mbytes_per_sec": 0, 00:35:59.466 "r_mbytes_per_sec": 0, 00:35:59.466 "w_mbytes_per_sec": 0 00:35:59.466 }, 00:35:59.466 "claimed": false, 00:35:59.466 "zoned": false, 00:35:59.466 "supported_io_types": { 00:35:59.466 "read": true, 00:35:59.466 "write": true, 00:35:59.466 "unmap": true, 00:35:59.466 "flush": true, 00:35:59.466 "reset": true, 00:35:59.466 "nvme_admin": false, 00:35:59.466 "nvme_io": false, 00:35:59.466 "nvme_io_md": false, 00:35:59.466 "write_zeroes": true, 00:35:59.466 "zcopy": false, 00:35:59.466 "get_zone_info": false, 00:35:59.466 "zone_management": false, 00:35:59.466 "zone_append": false, 00:35:59.466 "compare": false, 00:35:59.466 "compare_and_write": false, 00:35:59.466 "abort": false, 00:35:59.466 "seek_hole": false, 00:35:59.466 "seek_data": false, 00:35:59.466 "copy": false, 00:35:59.467 "nvme_iov_md": false 00:35:59.467 }, 00:35:59.467 
"memory_domains": [ 00:35:59.467 { 00:35:59.467 "dma_device_id": "system", 00:35:59.467 "dma_device_type": 1 00:35:59.467 }, 00:35:59.467 { 00:35:59.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:59.467 "dma_device_type": 2 00:35:59.467 }, 00:35:59.467 { 00:35:59.467 "dma_device_id": "system", 00:35:59.467 "dma_device_type": 1 00:35:59.467 }, 00:35:59.467 { 00:35:59.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:59.467 "dma_device_type": 2 00:35:59.467 }, 00:35:59.467 { 00:35:59.467 "dma_device_id": "system", 00:35:59.467 "dma_device_type": 1 00:35:59.467 }, 00:35:59.467 { 00:35:59.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:59.467 "dma_device_type": 2 00:35:59.467 }, 00:35:59.467 { 00:35:59.467 "dma_device_id": "system", 00:35:59.467 "dma_device_type": 1 00:35:59.467 }, 00:35:59.467 { 00:35:59.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:59.467 "dma_device_type": 2 00:35:59.467 } 00:35:59.467 ], 00:35:59.467 "driver_specific": { 00:35:59.467 "raid": { 00:35:59.467 "uuid": "9c209f7e-930d-4140-813c-b3fd3b11e468", 00:35:59.467 "strip_size_kb": 64, 00:35:59.467 "state": "online", 00:35:59.467 "raid_level": "raid0", 00:35:59.467 "superblock": true, 00:35:59.467 "num_base_bdevs": 4, 00:35:59.467 "num_base_bdevs_discovered": 4, 00:35:59.467 "num_base_bdevs_operational": 4, 00:35:59.467 "base_bdevs_list": [ 00:35:59.467 { 00:35:59.467 "name": "pt1", 00:35:59.467 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:59.467 "is_configured": true, 00:35:59.467 "data_offset": 2048, 00:35:59.467 "data_size": 63488 00:35:59.467 }, 00:35:59.467 { 00:35:59.467 "name": "pt2", 00:35:59.467 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:59.467 "is_configured": true, 00:35:59.467 "data_offset": 2048, 00:35:59.467 "data_size": 63488 00:35:59.467 }, 00:35:59.467 { 00:35:59.467 "name": "pt3", 00:35:59.467 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:59.467 "is_configured": true, 00:35:59.467 "data_offset": 2048, 00:35:59.467 "data_size": 63488 
00:35:59.467 }, 00:35:59.467 { 00:35:59.467 "name": "pt4", 00:35:59.467 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:59.467 "is_configured": true, 00:35:59.467 "data_offset": 2048, 00:35:59.467 "data_size": 63488 00:35:59.467 } 00:35:59.467 ] 00:35:59.467 } 00:35:59.467 } 00:35:59.467 }' 00:35:59.467 17:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:59.726 17:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:35:59.726 pt2 00:35:59.726 pt3 00:35:59.726 pt4' 00:35:59.726 17:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:59.726 17:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:35:59.726 17:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:59.726 17:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:35:59.726 17:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:59.726 17:33:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.726 17:33:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:59.726 17:33:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.726 17:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:59.726 17:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:59.726 17:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:59.726 17:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:35:59.726 17:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:59.726 17:33:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.726 17:33:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:59.726 17:33:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.726 17:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:59.726 17:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:59.726 17:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:59.726 17:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:35:59.726 17:33:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.726 17:33:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:59.726 17:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:59.726 17:33:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.726 17:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:59.726 17:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:59.726 17:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:59.726 17:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:35:59.726 17:33:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.726 17:33:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:59.726 17:33:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:59.726 17:33:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.726 17:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:59.726 17:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:59.726 17:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:35:59.726 17:33:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.726 17:33:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:59.726 17:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:35:59.726 [2024-11-26 17:33:00.417160] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:59.982 17:33:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.982 17:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9c209f7e-930d-4140-813c-b3fd3b11e468 '!=' 9c209f7e-930d-4140-813c-b3fd3b11e468 ']' 00:35:59.982 17:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:35:59.982 17:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:35:59.982 17:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:35:59.982 17:33:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70991 00:35:59.982 17:33:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70991 ']' 00:35:59.982 17:33:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70991 00:35:59.982 17:33:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:35:59.982 17:33:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:59.982 17:33:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70991 00:35:59.982 17:33:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:59.982 17:33:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:59.983 17:33:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70991' 00:35:59.983 killing process with pid 70991 00:35:59.983 17:33:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70991 00:35:59.983 [2024-11-26 17:33:00.500945] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:59.983 [2024-11-26 17:33:00.501103] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:59.983 17:33:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70991 00:35:59.983 [2024-11-26 17:33:00.501224] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:59.983 [2024-11-26 17:33:00.501274] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:36:00.547 [2024-11-26 17:33:00.940593] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:01.486 17:33:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:36:01.486 00:36:01.486 real 0m5.834s 00:36:01.486 user 0m8.355s 00:36:01.486 sys 0m0.949s 00:36:01.486 17:33:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:01.486 17:33:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:01.486 ************************************ 00:36:01.486 END TEST raid_superblock_test 
00:36:01.486 ************************************ 00:36:01.747 17:33:02 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:36:01.747 17:33:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:36:01.747 17:33:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:01.747 17:33:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:01.747 ************************************ 00:36:01.747 START TEST raid_read_error_test 00:36:01.747 ************************************ 00:36:01.747 17:33:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:36:01.747 17:33:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:36:01.747 17:33:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:36:01.747 17:33:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:36:01.747 17:33:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:36:01.747 17:33:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:36:01.747 17:33:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:36:01.747 17:33:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:36:01.747 17:33:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:36:01.747 17:33:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:36:01.747 17:33:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:36:01.747 17:33:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:36:01.747 17:33:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:36:01.747 17:33:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:36:01.747 17:33:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:36:01.747 17:33:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:36:01.747 17:33:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:36:01.747 17:33:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:36:01.747 17:33:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:36:01.747 17:33:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:36:01.747 17:33:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:36:01.747 17:33:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:36:01.747 17:33:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:36:01.747 17:33:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:36:01.747 17:33:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:36:01.747 17:33:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:36:01.747 17:33:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:36:01.747 17:33:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:36:01.747 17:33:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:36:01.747 17:33:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.xNtxyQfvdF 00:36:01.747 17:33:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71259 00:36:01.747 17:33:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f 
-L bdev_raid 00:36:01.747 17:33:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71259 00:36:01.747 17:33:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 71259 ']' 00:36:01.747 17:33:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:01.747 17:33:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:01.747 17:33:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:01.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:01.747 17:33:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:01.747 17:33:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:01.747 [2024-11-26 17:33:02.312607] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:36:01.747 [2024-11-26 17:33:02.312757] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71259 ] 00:36:02.007 [2024-11-26 17:33:02.488253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:02.007 [2024-11-26 17:33:02.626769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:02.266 [2024-11-26 17:33:02.869194] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:02.266 [2024-11-26 17:33:02.869242] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:02.525 17:33:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:02.525 17:33:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:36:02.525 17:33:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:36:02.525 17:33:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:36:02.526 17:33:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.526 17:33:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:02.785 BaseBdev1_malloc 00:36:02.785 17:33:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.785 17:33:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:36:02.785 17:33:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.785 17:33:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:02.785 true 00:36:02.785 17:33:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:36:02.785 17:33:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:36:02.785 17:33:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.785 17:33:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:02.785 [2024-11-26 17:33:03.266391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:36:02.785 [2024-11-26 17:33:03.266471] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:02.785 [2024-11-26 17:33:03.266494] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:36:02.785 [2024-11-26 17:33:03.266506] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:02.785 [2024-11-26 17:33:03.268963] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:02.785 [2024-11-26 17:33:03.269026] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:36:02.785 BaseBdev1 00:36:02.785 17:33:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.785 17:33:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:36:02.785 17:33:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:36:02.785 17:33:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.785 17:33:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:02.785 BaseBdev2_malloc 00:36:02.785 17:33:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.785 17:33:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:36:02.785 17:33:03 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.785 17:33:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:02.785 true 00:36:02.785 17:33:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.785 17:33:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:36:02.785 17:33:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.785 17:33:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:02.785 [2024-11-26 17:33:03.340759] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:36:02.785 [2024-11-26 17:33:03.340836] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:02.785 [2024-11-26 17:33:03.340858] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:36:02.785 [2024-11-26 17:33:03.340871] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:02.785 [2024-11-26 17:33:03.343423] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:02.785 [2024-11-26 17:33:03.343472] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:36:02.785 BaseBdev2 00:36:02.785 17:33:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.785 17:33:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:36:02.785 17:33:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:36:02.785 17:33:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.785 17:33:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:02.785 BaseBdev3_malloc 00:36:02.785 17:33:03 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.785 17:33:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:36:02.785 17:33:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.785 17:33:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:02.785 true 00:36:02.785 17:33:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.785 17:33:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:36:02.785 17:33:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.785 17:33:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:02.785 [2024-11-26 17:33:03.427213] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:36:02.785 [2024-11-26 17:33:03.427276] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:02.785 [2024-11-26 17:33:03.427299] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:36:02.785 [2024-11-26 17:33:03.427312] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:02.785 [2024-11-26 17:33:03.429811] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:02.785 [2024-11-26 17:33:03.429856] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:36:02.785 BaseBdev3 00:36:02.785 17:33:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.785 17:33:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:36:02.785 17:33:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:36:02.785 17:33:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.785 17:33:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:03.063 BaseBdev4_malloc 00:36:03.063 17:33:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.063 17:33:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:36:03.063 17:33:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.063 17:33:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:03.063 true 00:36:03.063 17:33:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.063 17:33:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:36:03.063 17:33:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.063 17:33:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:03.063 [2024-11-26 17:33:03.493000] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:36:03.063 [2024-11-26 17:33:03.493064] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:03.063 [2024-11-26 17:33:03.493084] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:36:03.063 [2024-11-26 17:33:03.493097] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:03.063 [2024-11-26 17:33:03.495589] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:03.063 [2024-11-26 17:33:03.495635] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:36:03.063 BaseBdev4 00:36:03.063 17:33:03 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.063 17:33:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:36:03.063 17:33:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.063 17:33:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:03.063 [2024-11-26 17:33:03.501061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:03.063 [2024-11-26 17:33:03.503210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:03.063 [2024-11-26 17:33:03.503301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:03.063 [2024-11-26 17:33:03.503381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:36:03.063 [2024-11-26 17:33:03.503651] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:36:03.063 [2024-11-26 17:33:03.503679] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:36:03.063 [2024-11-26 17:33:03.503980] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:36:03.063 [2024-11-26 17:33:03.504184] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:36:03.063 [2024-11-26 17:33:03.504206] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:36:03.063 [2024-11-26 17:33:03.504378] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:03.063 17:33:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.063 17:33:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:36:03.063 17:33:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:03.063 17:33:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:03.063 17:33:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:36:03.063 17:33:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:03.063 17:33:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:03.063 17:33:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:03.063 17:33:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:03.063 17:33:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:03.063 17:33:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:03.063 17:33:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:03.063 17:33:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.063 17:33:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:03.063 17:33:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:03.063 17:33:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.063 17:33:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:03.063 "name": "raid_bdev1", 00:36:03.063 "uuid": "b236a1cb-daca-41bc-afde-ab3c534afab4", 00:36:03.063 "strip_size_kb": 64, 00:36:03.063 "state": "online", 00:36:03.063 "raid_level": "raid0", 00:36:03.063 "superblock": true, 00:36:03.063 "num_base_bdevs": 4, 00:36:03.063 "num_base_bdevs_discovered": 4, 00:36:03.063 "num_base_bdevs_operational": 4, 00:36:03.063 "base_bdevs_list": [ 00:36:03.063 
{ 00:36:03.063 "name": "BaseBdev1", 00:36:03.063 "uuid": "d8446fe0-0561-5d90-a963-2afdaae993a6", 00:36:03.063 "is_configured": true, 00:36:03.063 "data_offset": 2048, 00:36:03.063 "data_size": 63488 00:36:03.063 }, 00:36:03.063 { 00:36:03.063 "name": "BaseBdev2", 00:36:03.063 "uuid": "d428cdf5-a233-54e6-a2ea-830be5a51063", 00:36:03.063 "is_configured": true, 00:36:03.063 "data_offset": 2048, 00:36:03.063 "data_size": 63488 00:36:03.063 }, 00:36:03.063 { 00:36:03.063 "name": "BaseBdev3", 00:36:03.063 "uuid": "4731a1fe-6cd3-525f-b5f4-bb4edbeff7f6", 00:36:03.063 "is_configured": true, 00:36:03.063 "data_offset": 2048, 00:36:03.063 "data_size": 63488 00:36:03.063 }, 00:36:03.063 { 00:36:03.063 "name": "BaseBdev4", 00:36:03.063 "uuid": "09d98b88-01b0-5bd4-9f58-e4ad0a394d00", 00:36:03.063 "is_configured": true, 00:36:03.063 "data_offset": 2048, 00:36:03.063 "data_size": 63488 00:36:03.063 } 00:36:03.063 ] 00:36:03.063 }' 00:36:03.063 17:33:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:03.063 17:33:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:03.339 17:33:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:36:03.339 17:33:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:36:03.339 [2024-11-26 17:33:04.017921] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:36:04.276 17:33:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:36:04.276 17:33:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.276 17:33:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:04.276 17:33:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.276 17:33:04 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:36:04.276 17:33:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:36:04.276 17:33:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:36:04.276 17:33:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:36:04.276 17:33:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:04.276 17:33:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:04.276 17:33:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:36:04.276 17:33:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:04.276 17:33:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:04.276 17:33:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:04.276 17:33:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:04.276 17:33:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:04.276 17:33:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:04.276 17:33:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:04.276 17:33:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:04.276 17:33:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.276 17:33:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:04.276 17:33:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.535 17:33:04 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:04.535 "name": "raid_bdev1", 00:36:04.535 "uuid": "b236a1cb-daca-41bc-afde-ab3c534afab4", 00:36:04.535 "strip_size_kb": 64, 00:36:04.535 "state": "online", 00:36:04.535 "raid_level": "raid0", 00:36:04.535 "superblock": true, 00:36:04.535 "num_base_bdevs": 4, 00:36:04.535 "num_base_bdevs_discovered": 4, 00:36:04.535 "num_base_bdevs_operational": 4, 00:36:04.535 "base_bdevs_list": [ 00:36:04.535 { 00:36:04.535 "name": "BaseBdev1", 00:36:04.535 "uuid": "d8446fe0-0561-5d90-a963-2afdaae993a6", 00:36:04.535 "is_configured": true, 00:36:04.535 "data_offset": 2048, 00:36:04.535 "data_size": 63488 00:36:04.535 }, 00:36:04.535 { 00:36:04.535 "name": "BaseBdev2", 00:36:04.535 "uuid": "d428cdf5-a233-54e6-a2ea-830be5a51063", 00:36:04.535 "is_configured": true, 00:36:04.535 "data_offset": 2048, 00:36:04.535 "data_size": 63488 00:36:04.535 }, 00:36:04.535 { 00:36:04.535 "name": "BaseBdev3", 00:36:04.535 "uuid": "4731a1fe-6cd3-525f-b5f4-bb4edbeff7f6", 00:36:04.535 "is_configured": true, 00:36:04.535 "data_offset": 2048, 00:36:04.535 "data_size": 63488 00:36:04.535 }, 00:36:04.535 { 00:36:04.535 "name": "BaseBdev4", 00:36:04.535 "uuid": "09d98b88-01b0-5bd4-9f58-e4ad0a394d00", 00:36:04.535 "is_configured": true, 00:36:04.535 "data_offset": 2048, 00:36:04.535 "data_size": 63488 00:36:04.535 } 00:36:04.535 ] 00:36:04.535 }' 00:36:04.535 17:33:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:04.535 17:33:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:04.794 17:33:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:36:04.794 17:33:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.794 17:33:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:04.794 [2024-11-26 17:33:05.338956] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:04.794 [2024-11-26 17:33:05.338999] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:04.794 [2024-11-26 17:33:05.342369] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:04.794 [2024-11-26 17:33:05.342440] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:04.794 [2024-11-26 17:33:05.342492] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:04.794 [2024-11-26 17:33:05.342505] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:36:04.794 { 00:36:04.794 "results": [ 00:36:04.794 { 00:36:04.794 "job": "raid_bdev1", 00:36:04.794 "core_mask": "0x1", 00:36:04.794 "workload": "randrw", 00:36:04.794 "percentage": 50, 00:36:04.794 "status": "finished", 00:36:04.794 "queue_depth": 1, 00:36:04.794 "io_size": 131072, 00:36:04.794 "runtime": 1.321538, 00:36:04.794 "iops": 12836.558615794627, 00:36:04.794 "mibps": 1604.5698269743284, 00:36:04.794 "io_failed": 1, 00:36:04.794 "io_timeout": 0, 00:36:04.794 "avg_latency_us": 107.77619867258176, 00:36:04.794 "min_latency_us": 28.841921397379913, 00:36:04.794 "max_latency_us": 1831.5737991266376 00:36:04.794 } 00:36:04.794 ], 00:36:04.794 "core_count": 1 00:36:04.794 } 00:36:04.794 17:33:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.794 17:33:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71259 00:36:04.794 17:33:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 71259 ']' 00:36:04.794 17:33:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 71259 00:36:04.794 17:33:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:36:04.794 17:33:05 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:04.794 17:33:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71259 00:36:04.794 killing process with pid 71259 00:36:04.794 17:33:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:04.794 17:33:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:04.794 17:33:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71259' 00:36:04.794 17:33:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 71259 00:36:04.794 17:33:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 71259 00:36:04.794 [2024-11-26 17:33:05.388605] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:05.052 [2024-11-26 17:33:05.743593] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:06.959 17:33:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:36:06.959 17:33:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.xNtxyQfvdF 00:36:06.959 17:33:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:36:06.959 17:33:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:36:06.959 17:33:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:36:06.959 17:33:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:36:06.959 17:33:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:36:06.959 17:33:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]] 00:36:06.959 00:36:06.959 real 0m4.949s 00:36:06.959 user 0m5.760s 00:36:06.959 sys 0m0.609s 00:36:06.959 17:33:07 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:36:06.959 17:33:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:06.959 ************************************ 00:36:06.959 END TEST raid_read_error_test 00:36:06.959 ************************************ 00:36:06.959 17:33:07 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:36:06.959 17:33:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:36:06.959 17:33:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:06.959 17:33:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:06.959 ************************************ 00:36:06.959 START TEST raid_write_error_test 00:36:06.959 ************************************ 00:36:06.959 17:33:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:36:06.959 17:33:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:36:06.959 17:33:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:36:06.959 17:33:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:36:06.959 17:33:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:36:06.959 17:33:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:36:06.959 17:33:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:36:06.959 17:33:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:36:06.959 17:33:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:36:06.959 17:33:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:36:06.959 17:33:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:36:06.959 17:33:07 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:36:06.959 17:33:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:36:06.959 17:33:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:36:06.959 17:33:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:36:06.959 17:33:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:36:06.959 17:33:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:36:06.959 17:33:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:36:06.959 17:33:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:36:06.959 17:33:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:36:06.959 17:33:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:36:06.959 17:33:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:36:06.959 17:33:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:36:06.959 17:33:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:36:06.959 17:33:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:36:06.959 17:33:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:36:06.959 17:33:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:36:06.959 17:33:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:36:06.959 17:33:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:36:06.960 17:33:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.7zeJmQupwm 
00:36:06.960 17:33:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71403 00:36:06.960 17:33:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:36:06.960 17:33:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71403 00:36:06.960 17:33:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71403 ']' 00:36:06.960 17:33:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:06.960 17:33:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:06.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:06.960 17:33:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:06.960 17:33:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:06.960 17:33:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:06.960 [2024-11-26 17:33:07.321992] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:36:06.960 [2024-11-26 17:33:07.322121] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71403 ] 00:36:06.960 [2024-11-26 17:33:07.479126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:06.960 [2024-11-26 17:33:07.601991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:07.228 [2024-11-26 17:33:07.822645] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:07.228 [2024-11-26 17:33:07.822722] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:07.797 17:33:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:07.797 17:33:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:36:07.797 17:33:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:36:07.798 17:33:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:36:07.798 17:33:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.798 17:33:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:07.798 BaseBdev1_malloc 00:36:07.798 17:33:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.798 17:33:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:36:07.798 17:33:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.798 17:33:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:07.798 true 00:36:07.798 17:33:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:36:07.798 17:33:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:36:07.798 17:33:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.798 17:33:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:07.798 [2024-11-26 17:33:08.278407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:36:07.798 [2024-11-26 17:33:08.278469] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:07.798 [2024-11-26 17:33:08.278494] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:36:07.798 [2024-11-26 17:33:08.278507] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:07.798 [2024-11-26 17:33:08.280985] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:07.798 [2024-11-26 17:33:08.281031] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:36:07.798 BaseBdev1 00:36:07.798 17:33:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.798 17:33:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:36:07.798 17:33:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:36:07.798 17:33:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.798 17:33:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:07.798 BaseBdev2_malloc 00:36:07.798 17:33:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.798 17:33:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:36:07.798 17:33:08 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.798 17:33:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:07.798 true 00:36:07.798 17:33:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.798 17:33:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:36:07.798 17:33:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.798 17:33:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:07.798 [2024-11-26 17:33:08.351288] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:36:07.798 [2024-11-26 17:33:08.351348] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:07.798 [2024-11-26 17:33:08.351368] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:36:07.798 [2024-11-26 17:33:08.351379] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:07.798 [2024-11-26 17:33:08.353784] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:07.798 [2024-11-26 17:33:08.353825] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:36:07.798 BaseBdev2 00:36:07.798 17:33:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.798 17:33:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:36:07.798 17:33:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:36:07.798 17:33:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.798 17:33:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:36:07.798 BaseBdev3_malloc 00:36:07.798 17:33:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.798 17:33:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:36:07.798 17:33:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.798 17:33:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:07.798 true 00:36:07.798 17:33:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.798 17:33:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:36:07.798 17:33:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.798 17:33:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:07.798 [2024-11-26 17:33:08.443780] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:36:07.798 [2024-11-26 17:33:08.443834] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:07.798 [2024-11-26 17:33:08.443853] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:36:07.798 [2024-11-26 17:33:08.443864] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:07.798 [2024-11-26 17:33:08.446209] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:07.798 [2024-11-26 17:33:08.446248] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:36:07.798 BaseBdev3 00:36:07.798 17:33:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.798 17:33:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:36:07.798 17:33:08 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:36:07.798 17:33:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.798 17:33:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:08.058 BaseBdev4_malloc 00:36:08.058 17:33:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.058 17:33:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:36:08.058 17:33:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.058 17:33:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:08.058 true 00:36:08.058 17:33:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.058 17:33:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:36:08.058 17:33:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.058 17:33:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:08.058 [2024-11-26 17:33:08.510816] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:36:08.058 [2024-11-26 17:33:08.510887] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:08.058 [2024-11-26 17:33:08.510906] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:36:08.058 [2024-11-26 17:33:08.510916] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:08.058 [2024-11-26 17:33:08.513018] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:08.058 [2024-11-26 17:33:08.513059] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:36:08.058 BaseBdev4 
00:36:08.058 17:33:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.058 17:33:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:36:08.058 17:33:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.058 17:33:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:08.058 [2024-11-26 17:33:08.522873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:08.058 [2024-11-26 17:33:08.524761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:08.058 [2024-11-26 17:33:08.524844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:08.058 [2024-11-26 17:33:08.524909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:36:08.059 [2024-11-26 17:33:08.525157] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:36:08.059 [2024-11-26 17:33:08.525182] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:36:08.059 [2024-11-26 17:33:08.525453] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:36:08.059 [2024-11-26 17:33:08.525642] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:36:08.059 [2024-11-26 17:33:08.525661] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:36:08.059 [2024-11-26 17:33:08.525841] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:08.059 17:33:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.059 17:33:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:36:08.059 17:33:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:08.059 17:33:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:08.059 17:33:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:36:08.059 17:33:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:08.059 17:33:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:08.059 17:33:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:08.059 17:33:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:08.059 17:33:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:08.059 17:33:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:08.059 17:33:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:08.059 17:33:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:08.059 17:33:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.059 17:33:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:08.059 17:33:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.059 17:33:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:08.059 "name": "raid_bdev1", 00:36:08.059 "uuid": "37e693dc-cdbd-4202-80ce-09b701776018", 00:36:08.059 "strip_size_kb": 64, 00:36:08.059 "state": "online", 00:36:08.059 "raid_level": "raid0", 00:36:08.059 "superblock": true, 00:36:08.059 "num_base_bdevs": 4, 00:36:08.059 "num_base_bdevs_discovered": 4, 00:36:08.059 
"num_base_bdevs_operational": 4, 00:36:08.059 "base_bdevs_list": [ 00:36:08.059 { 00:36:08.059 "name": "BaseBdev1", 00:36:08.059 "uuid": "9d9480f3-0cce-5188-98e8-e95e3c23b3ad", 00:36:08.059 "is_configured": true, 00:36:08.059 "data_offset": 2048, 00:36:08.059 "data_size": 63488 00:36:08.059 }, 00:36:08.059 { 00:36:08.059 "name": "BaseBdev2", 00:36:08.059 "uuid": "b0f08475-cc6a-5f5b-a747-0a4f8ad94544", 00:36:08.059 "is_configured": true, 00:36:08.059 "data_offset": 2048, 00:36:08.059 "data_size": 63488 00:36:08.059 }, 00:36:08.059 { 00:36:08.059 "name": "BaseBdev3", 00:36:08.059 "uuid": "d2d57bb3-8e04-5f05-b9fb-3349fd48561d", 00:36:08.059 "is_configured": true, 00:36:08.059 "data_offset": 2048, 00:36:08.059 "data_size": 63488 00:36:08.059 }, 00:36:08.059 { 00:36:08.059 "name": "BaseBdev4", 00:36:08.059 "uuid": "8cb23303-62a1-588c-b853-068fe5c25286", 00:36:08.059 "is_configured": true, 00:36:08.059 "data_offset": 2048, 00:36:08.059 "data_size": 63488 00:36:08.059 } 00:36:08.059 ] 00:36:08.059 }' 00:36:08.059 17:33:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:08.059 17:33:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:08.318 17:33:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:36:08.319 17:33:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:36:08.580 [2024-11-26 17:33:09.083530] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:36:09.519 17:33:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:36:09.519 17:33:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.519 17:33:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:09.519 17:33:09 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.519 17:33:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:36:09.519 17:33:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:36:09.519 17:33:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:36:09.519 17:33:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:36:09.519 17:33:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:09.519 17:33:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:09.519 17:33:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:36:09.519 17:33:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:09.519 17:33:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:09.519 17:33:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:09.519 17:33:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:09.519 17:33:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:09.519 17:33:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:09.519 17:33:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:09.519 17:33:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:09.519 17:33:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.519 17:33:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:09.519 17:33:10 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.519 17:33:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:09.519 "name": "raid_bdev1", 00:36:09.519 "uuid": "37e693dc-cdbd-4202-80ce-09b701776018", 00:36:09.519 "strip_size_kb": 64, 00:36:09.519 "state": "online", 00:36:09.519 "raid_level": "raid0", 00:36:09.519 "superblock": true, 00:36:09.519 "num_base_bdevs": 4, 00:36:09.519 "num_base_bdevs_discovered": 4, 00:36:09.519 "num_base_bdevs_operational": 4, 00:36:09.519 "base_bdevs_list": [ 00:36:09.519 { 00:36:09.519 "name": "BaseBdev1", 00:36:09.519 "uuid": "9d9480f3-0cce-5188-98e8-e95e3c23b3ad", 00:36:09.519 "is_configured": true, 00:36:09.519 "data_offset": 2048, 00:36:09.519 "data_size": 63488 00:36:09.519 }, 00:36:09.519 { 00:36:09.519 "name": "BaseBdev2", 00:36:09.519 "uuid": "b0f08475-cc6a-5f5b-a747-0a4f8ad94544", 00:36:09.519 "is_configured": true, 00:36:09.519 "data_offset": 2048, 00:36:09.519 "data_size": 63488 00:36:09.519 }, 00:36:09.519 { 00:36:09.519 "name": "BaseBdev3", 00:36:09.519 "uuid": "d2d57bb3-8e04-5f05-b9fb-3349fd48561d", 00:36:09.519 "is_configured": true, 00:36:09.519 "data_offset": 2048, 00:36:09.519 "data_size": 63488 00:36:09.519 }, 00:36:09.519 { 00:36:09.519 "name": "BaseBdev4", 00:36:09.519 "uuid": "8cb23303-62a1-588c-b853-068fe5c25286", 00:36:09.519 "is_configured": true, 00:36:09.519 "data_offset": 2048, 00:36:09.519 "data_size": 63488 00:36:09.519 } 00:36:09.519 ] 00:36:09.519 }' 00:36:09.519 17:33:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:09.519 17:33:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:10.089 17:33:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:36:10.089 17:33:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.089 17:33:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:36:10.089 [2024-11-26 17:33:10.490787] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:10.089 [2024-11-26 17:33:10.490830] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:10.089 [2024-11-26 17:33:10.494095] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:10.089 [2024-11-26 17:33:10.494169] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:10.089 [2024-11-26 17:33:10.494221] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:10.089 [2024-11-26 17:33:10.494234] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:36:10.089 { 00:36:10.089 "results": [ 00:36:10.089 { 00:36:10.089 "job": "raid_bdev1", 00:36:10.089 "core_mask": "0x1", 00:36:10.089 "workload": "randrw", 00:36:10.089 "percentage": 50, 00:36:10.089 "status": "finished", 00:36:10.089 "queue_depth": 1, 00:36:10.089 "io_size": 131072, 00:36:10.089 "runtime": 1.407982, 00:36:10.089 "iops": 13919.922271733587, 00:36:10.089 "mibps": 1739.9902839666984, 00:36:10.089 "io_failed": 1, 00:36:10.089 "io_timeout": 0, 00:36:10.089 "avg_latency_us": 99.50735228589252, 00:36:10.089 "min_latency_us": 27.72401746724891, 00:36:10.089 "max_latency_us": 1760.0279475982534 00:36:10.089 } 00:36:10.089 ], 00:36:10.089 "core_count": 1 00:36:10.089 } 00:36:10.089 17:33:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.089 17:33:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71403 00:36:10.089 17:33:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71403 ']' 00:36:10.089 17:33:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71403 00:36:10.089 17:33:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
00:36:10.089 17:33:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:10.089 17:33:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71403 00:36:10.089 17:33:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:10.089 17:33:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:10.089 killing process with pid 71403 00:36:10.089 17:33:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71403' 00:36:10.089 17:33:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71403 00:36:10.089 [2024-11-26 17:33:10.540202] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:10.090 17:33:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71403 00:36:10.350 [2024-11-26 17:33:10.938097] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:11.728 17:33:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.7zeJmQupwm 00:36:11.728 17:33:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:36:11.728 17:33:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:36:11.728 17:33:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:36:11.728 17:33:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:36:11.728 17:33:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:36:11.728 17:33:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:36:11.728 17:33:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:36:11.728 00:36:11.728 real 0m5.176s 00:36:11.728 user 0m6.179s 00:36:11.728 sys 0m0.569s 00:36:11.728 17:33:12 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:11.728 17:33:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:11.728 ************************************ 00:36:11.728 END TEST raid_write_error_test 00:36:11.728 ************************************ 00:36:11.988 17:33:12 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:36:11.988 17:33:12 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:36:11.988 17:33:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:36:11.988 17:33:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:11.988 17:33:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:11.988 ************************************ 00:36:11.988 START TEST raid_state_function_test 00:36:11.988 ************************************ 00:36:11.988 17:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:36:11.988 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:36:11.988 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:36:11.988 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:36:11.988 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:36:11.988 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:36:11.988 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:11.988 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:36:11.988 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:36:11.988 17:33:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:11.988 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:36:11.988 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:36:11.988 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:11.988 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:36:11.988 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:36:11.988 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:11.988 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:36:11.988 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:36:11.988 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:11.988 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:36:11.988 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:36:11.988 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:36:11.988 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:36:11.988 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:36:11.988 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:36:11.988 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:36:11.988 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:36:11.988 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:36:11.988 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:36:11.988 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:36:11.988 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:36:11.988 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71554 00:36:11.988 Process raid pid: 71554 00:36:11.988 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71554' 00:36:11.988 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71554 00:36:11.988 17:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71554 ']' 00:36:11.988 17:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:11.988 17:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:11.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:11.988 17:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:11.988 17:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:11.988 17:33:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:11.988 [2024-11-26 17:33:12.570977] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:36:11.988 [2024-11-26 17:33:12.571150] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:12.247 [2024-11-26 17:33:12.756874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:12.247 [2024-11-26 17:33:12.892121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:12.507 [2024-11-26 17:33:13.141020] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:12.507 [2024-11-26 17:33:13.141068] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:13.076 17:33:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:13.076 17:33:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:36:13.076 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:36:13.076 17:33:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.076 17:33:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:13.076 [2024-11-26 17:33:13.518793] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:13.076 [2024-11-26 17:33:13.518852] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:13.076 [2024-11-26 17:33:13.518865] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:13.076 [2024-11-26 17:33:13.518877] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:13.076 [2024-11-26 17:33:13.518885] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:36:13.076 [2024-11-26 17:33:13.518895] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:13.076 [2024-11-26 17:33:13.518902] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:36:13.076 [2024-11-26 17:33:13.518912] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:36:13.076 17:33:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.076 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:36:13.076 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:13.076 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:13.076 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:13.076 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:13.076 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:13.076 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:13.076 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:13.076 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:13.076 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:13.076 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:13.076 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:13.076 17:33:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.076 17:33:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:13.076 17:33:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.076 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:13.076 "name": "Existed_Raid", 00:36:13.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:13.076 "strip_size_kb": 64, 00:36:13.076 "state": "configuring", 00:36:13.076 "raid_level": "concat", 00:36:13.076 "superblock": false, 00:36:13.076 "num_base_bdevs": 4, 00:36:13.076 "num_base_bdevs_discovered": 0, 00:36:13.076 "num_base_bdevs_operational": 4, 00:36:13.076 "base_bdevs_list": [ 00:36:13.076 { 00:36:13.076 "name": "BaseBdev1", 00:36:13.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:13.076 "is_configured": false, 00:36:13.076 "data_offset": 0, 00:36:13.076 "data_size": 0 00:36:13.076 }, 00:36:13.076 { 00:36:13.076 "name": "BaseBdev2", 00:36:13.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:13.076 "is_configured": false, 00:36:13.076 "data_offset": 0, 00:36:13.076 "data_size": 0 00:36:13.076 }, 00:36:13.076 { 00:36:13.076 "name": "BaseBdev3", 00:36:13.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:13.076 "is_configured": false, 00:36:13.076 "data_offset": 0, 00:36:13.076 "data_size": 0 00:36:13.076 }, 00:36:13.076 { 00:36:13.076 "name": "BaseBdev4", 00:36:13.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:13.076 "is_configured": false, 00:36:13.076 "data_offset": 0, 00:36:13.076 "data_size": 0 00:36:13.076 } 00:36:13.076 ] 00:36:13.076 }' 00:36:13.076 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:13.076 17:33:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:13.336 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:36:13.336 17:33:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.336 17:33:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:13.336 [2024-11-26 17:33:13.997922] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:13.336 [2024-11-26 17:33:13.997973] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:36:13.336 17:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.336 17:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:36:13.336 17:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.336 17:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:13.336 [2024-11-26 17:33:14.005904] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:13.336 [2024-11-26 17:33:14.005953] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:13.336 [2024-11-26 17:33:14.005964] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:13.336 [2024-11-26 17:33:14.005975] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:13.336 [2024-11-26 17:33:14.005983] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:13.336 [2024-11-26 17:33:14.005993] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:13.336 [2024-11-26 17:33:14.006001] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:36:13.336 [2024-11-26 17:33:14.006011] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:36:13.336 17:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.336 17:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:36:13.336 17:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.336 17:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:13.595 [2024-11-26 17:33:14.057728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:13.595 BaseBdev1 00:36:13.595 17:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.595 17:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:36:13.595 17:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:36:13.595 17:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:13.595 17:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:36:13.595 17:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:13.595 17:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:13.595 17:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:13.595 17:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.595 17:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:13.595 17:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.595 17:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:36:13.595 17:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.596 17:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:13.596 [ 00:36:13.596 { 00:36:13.596 "name": "BaseBdev1", 00:36:13.596 "aliases": [ 00:36:13.596 "b0edddec-4a65-4d7f-a7f9-77d725a5d25b" 00:36:13.596 ], 00:36:13.596 "product_name": "Malloc disk", 00:36:13.596 "block_size": 512, 00:36:13.596 "num_blocks": 65536, 00:36:13.596 "uuid": "b0edddec-4a65-4d7f-a7f9-77d725a5d25b", 00:36:13.596 "assigned_rate_limits": { 00:36:13.596 "rw_ios_per_sec": 0, 00:36:13.596 "rw_mbytes_per_sec": 0, 00:36:13.596 "r_mbytes_per_sec": 0, 00:36:13.596 "w_mbytes_per_sec": 0 00:36:13.596 }, 00:36:13.596 "claimed": true, 00:36:13.596 "claim_type": "exclusive_write", 00:36:13.596 "zoned": false, 00:36:13.596 "supported_io_types": { 00:36:13.596 "read": true, 00:36:13.596 "write": true, 00:36:13.596 "unmap": true, 00:36:13.596 "flush": true, 00:36:13.596 "reset": true, 00:36:13.596 "nvme_admin": false, 00:36:13.596 "nvme_io": false, 00:36:13.596 "nvme_io_md": false, 00:36:13.596 "write_zeroes": true, 00:36:13.596 "zcopy": true, 00:36:13.596 "get_zone_info": false, 00:36:13.596 "zone_management": false, 00:36:13.596 "zone_append": false, 00:36:13.596 "compare": false, 00:36:13.596 "compare_and_write": false, 00:36:13.596 "abort": true, 00:36:13.596 "seek_hole": false, 00:36:13.596 "seek_data": false, 00:36:13.596 "copy": true, 00:36:13.596 "nvme_iov_md": false 00:36:13.596 }, 00:36:13.596 "memory_domains": [ 00:36:13.596 { 00:36:13.596 "dma_device_id": "system", 00:36:13.596 "dma_device_type": 1 00:36:13.596 }, 00:36:13.596 { 00:36:13.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:13.596 "dma_device_type": 2 00:36:13.596 } 00:36:13.596 ], 00:36:13.596 "driver_specific": {} 00:36:13.596 } 00:36:13.596 ] 00:36:13.596 17:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:36:13.596 17:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:36:13.596 17:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:36:13.596 17:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:13.596 17:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:13.596 17:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:13.596 17:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:13.596 17:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:13.596 17:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:13.596 17:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:13.596 17:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:13.596 17:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:13.596 17:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:13.596 17:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.596 17:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:13.596 17:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:13.596 17:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.596 17:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:13.596 "name": "Existed_Raid", 
00:36:13.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:13.596 "strip_size_kb": 64, 00:36:13.596 "state": "configuring", 00:36:13.596 "raid_level": "concat", 00:36:13.596 "superblock": false, 00:36:13.596 "num_base_bdevs": 4, 00:36:13.596 "num_base_bdevs_discovered": 1, 00:36:13.596 "num_base_bdevs_operational": 4, 00:36:13.596 "base_bdevs_list": [ 00:36:13.596 { 00:36:13.596 "name": "BaseBdev1", 00:36:13.596 "uuid": "b0edddec-4a65-4d7f-a7f9-77d725a5d25b", 00:36:13.596 "is_configured": true, 00:36:13.596 "data_offset": 0, 00:36:13.596 "data_size": 65536 00:36:13.596 }, 00:36:13.596 { 00:36:13.596 "name": "BaseBdev2", 00:36:13.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:13.596 "is_configured": false, 00:36:13.596 "data_offset": 0, 00:36:13.596 "data_size": 0 00:36:13.596 }, 00:36:13.596 { 00:36:13.596 "name": "BaseBdev3", 00:36:13.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:13.596 "is_configured": false, 00:36:13.596 "data_offset": 0, 00:36:13.596 "data_size": 0 00:36:13.596 }, 00:36:13.596 { 00:36:13.596 "name": "BaseBdev4", 00:36:13.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:13.596 "is_configured": false, 00:36:13.596 "data_offset": 0, 00:36:13.596 "data_size": 0 00:36:13.596 } 00:36:13.596 ] 00:36:13.596 }' 00:36:13.596 17:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:13.596 17:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:14.163 17:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:36:14.163 17:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.163 17:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:14.163 [2024-11-26 17:33:14.564933] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:14.163 [2024-11-26 17:33:14.565000] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:36:14.163 17:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.163 17:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:36:14.163 17:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.163 17:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:14.163 [2024-11-26 17:33:14.576988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:14.163 [2024-11-26 17:33:14.579105] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:14.163 [2024-11-26 17:33:14.579150] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:14.163 [2024-11-26 17:33:14.579162] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:14.163 [2024-11-26 17:33:14.579175] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:14.163 [2024-11-26 17:33:14.579183] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:36:14.163 [2024-11-26 17:33:14.579194] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:36:14.163 17:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.163 17:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:36:14.163 17:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:36:14.163 17:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:36:14.163 17:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:14.163 17:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:14.163 17:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:14.163 17:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:14.163 17:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:14.163 17:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:14.163 17:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:14.163 17:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:14.163 17:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:14.163 17:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:14.163 17:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:14.163 17:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.163 17:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:14.163 17:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.163 17:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:14.163 "name": "Existed_Raid", 00:36:14.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:14.163 "strip_size_kb": 64, 00:36:14.163 "state": "configuring", 00:36:14.163 "raid_level": "concat", 00:36:14.163 "superblock": false, 00:36:14.163 "num_base_bdevs": 4, 00:36:14.163 
"num_base_bdevs_discovered": 1, 00:36:14.163 "num_base_bdevs_operational": 4, 00:36:14.163 "base_bdevs_list": [ 00:36:14.163 { 00:36:14.163 "name": "BaseBdev1", 00:36:14.163 "uuid": "b0edddec-4a65-4d7f-a7f9-77d725a5d25b", 00:36:14.163 "is_configured": true, 00:36:14.163 "data_offset": 0, 00:36:14.163 "data_size": 65536 00:36:14.163 }, 00:36:14.163 { 00:36:14.163 "name": "BaseBdev2", 00:36:14.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:14.163 "is_configured": false, 00:36:14.163 "data_offset": 0, 00:36:14.163 "data_size": 0 00:36:14.163 }, 00:36:14.163 { 00:36:14.163 "name": "BaseBdev3", 00:36:14.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:14.163 "is_configured": false, 00:36:14.163 "data_offset": 0, 00:36:14.163 "data_size": 0 00:36:14.163 }, 00:36:14.163 { 00:36:14.164 "name": "BaseBdev4", 00:36:14.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:14.164 "is_configured": false, 00:36:14.164 "data_offset": 0, 00:36:14.164 "data_size": 0 00:36:14.164 } 00:36:14.164 ] 00:36:14.164 }' 00:36:14.164 17:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:14.164 17:33:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:14.423 17:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:36:14.423 17:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.423 17:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:14.423 [2024-11-26 17:33:15.057651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:14.423 BaseBdev2 00:36:14.423 17:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.423 17:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:36:14.423 17:33:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:36:14.423 17:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:14.423 17:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:36:14.423 17:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:14.423 17:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:14.423 17:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:14.423 17:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.423 17:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:14.423 17:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.423 17:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:36:14.423 17:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.423 17:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:14.423 [ 00:36:14.423 { 00:36:14.423 "name": "BaseBdev2", 00:36:14.423 "aliases": [ 00:36:14.423 "36b89356-adc9-48c1-8890-b269d1ca2798" 00:36:14.423 ], 00:36:14.423 "product_name": "Malloc disk", 00:36:14.423 "block_size": 512, 00:36:14.423 "num_blocks": 65536, 00:36:14.423 "uuid": "36b89356-adc9-48c1-8890-b269d1ca2798", 00:36:14.423 "assigned_rate_limits": { 00:36:14.423 "rw_ios_per_sec": 0, 00:36:14.423 "rw_mbytes_per_sec": 0, 00:36:14.423 "r_mbytes_per_sec": 0, 00:36:14.423 "w_mbytes_per_sec": 0 00:36:14.423 }, 00:36:14.423 "claimed": true, 00:36:14.423 "claim_type": "exclusive_write", 00:36:14.423 "zoned": false, 00:36:14.423 "supported_io_types": { 
00:36:14.423 "read": true, 00:36:14.423 "write": true, 00:36:14.423 "unmap": true, 00:36:14.423 "flush": true, 00:36:14.423 "reset": true, 00:36:14.423 "nvme_admin": false, 00:36:14.423 "nvme_io": false, 00:36:14.423 "nvme_io_md": false, 00:36:14.423 "write_zeroes": true, 00:36:14.423 "zcopy": true, 00:36:14.423 "get_zone_info": false, 00:36:14.423 "zone_management": false, 00:36:14.423 "zone_append": false, 00:36:14.423 "compare": false, 00:36:14.423 "compare_and_write": false, 00:36:14.423 "abort": true, 00:36:14.423 "seek_hole": false, 00:36:14.423 "seek_data": false, 00:36:14.423 "copy": true, 00:36:14.423 "nvme_iov_md": false 00:36:14.423 }, 00:36:14.423 "memory_domains": [ 00:36:14.423 { 00:36:14.423 "dma_device_id": "system", 00:36:14.423 "dma_device_type": 1 00:36:14.423 }, 00:36:14.423 { 00:36:14.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:14.423 "dma_device_type": 2 00:36:14.423 } 00:36:14.423 ], 00:36:14.423 "driver_specific": {} 00:36:14.423 } 00:36:14.423 ] 00:36:14.423 17:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.423 17:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:36:14.423 17:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:36:14.423 17:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:36:14.423 17:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:36:14.423 17:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:14.423 17:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:14.423 17:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:14.423 17:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:36:14.423 17:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:14.423 17:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:14.423 17:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:14.423 17:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:14.423 17:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:14.423 17:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:14.423 17:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:14.423 17:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.423 17:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:14.682 17:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.682 17:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:14.682 "name": "Existed_Raid", 00:36:14.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:14.682 "strip_size_kb": 64, 00:36:14.682 "state": "configuring", 00:36:14.682 "raid_level": "concat", 00:36:14.682 "superblock": false, 00:36:14.682 "num_base_bdevs": 4, 00:36:14.682 "num_base_bdevs_discovered": 2, 00:36:14.682 "num_base_bdevs_operational": 4, 00:36:14.682 "base_bdevs_list": [ 00:36:14.682 { 00:36:14.682 "name": "BaseBdev1", 00:36:14.682 "uuid": "b0edddec-4a65-4d7f-a7f9-77d725a5d25b", 00:36:14.682 "is_configured": true, 00:36:14.682 "data_offset": 0, 00:36:14.682 "data_size": 65536 00:36:14.682 }, 00:36:14.682 { 00:36:14.682 "name": "BaseBdev2", 00:36:14.682 "uuid": "36b89356-adc9-48c1-8890-b269d1ca2798", 00:36:14.682 
"is_configured": true, 00:36:14.682 "data_offset": 0, 00:36:14.682 "data_size": 65536 00:36:14.682 }, 00:36:14.682 { 00:36:14.682 "name": "BaseBdev3", 00:36:14.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:14.682 "is_configured": false, 00:36:14.682 "data_offset": 0, 00:36:14.682 "data_size": 0 00:36:14.682 }, 00:36:14.682 { 00:36:14.682 "name": "BaseBdev4", 00:36:14.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:14.682 "is_configured": false, 00:36:14.682 "data_offset": 0, 00:36:14.682 "data_size": 0 00:36:14.682 } 00:36:14.682 ] 00:36:14.682 }' 00:36:14.682 17:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:14.682 17:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:14.940 17:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:36:14.940 17:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.940 17:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:14.940 [2024-11-26 17:33:15.595845] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:14.940 BaseBdev3 00:36:14.940 17:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.940 17:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:36:14.940 17:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:36:14.940 17:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:14.940 17:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:36:14.940 17:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:14.940 17:33:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:14.940 17:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:14.940 17:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.940 17:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:14.940 17:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.940 17:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:36:14.940 17:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.940 17:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:14.940 [ 00:36:14.940 { 00:36:14.940 "name": "BaseBdev3", 00:36:14.940 "aliases": [ 00:36:14.940 "d70d8621-7f1b-49a9-8500-e7fe560d6715" 00:36:14.940 ], 00:36:14.940 "product_name": "Malloc disk", 00:36:14.940 "block_size": 512, 00:36:14.940 "num_blocks": 65536, 00:36:14.940 "uuid": "d70d8621-7f1b-49a9-8500-e7fe560d6715", 00:36:14.940 "assigned_rate_limits": { 00:36:14.940 "rw_ios_per_sec": 0, 00:36:14.940 "rw_mbytes_per_sec": 0, 00:36:14.940 "r_mbytes_per_sec": 0, 00:36:14.940 "w_mbytes_per_sec": 0 00:36:14.940 }, 00:36:14.940 "claimed": true, 00:36:14.940 "claim_type": "exclusive_write", 00:36:14.940 "zoned": false, 00:36:14.940 "supported_io_types": { 00:36:14.940 "read": true, 00:36:14.940 "write": true, 00:36:14.940 "unmap": true, 00:36:14.940 "flush": true, 00:36:14.940 "reset": true, 00:36:14.940 "nvme_admin": false, 00:36:14.940 "nvme_io": false, 00:36:14.940 "nvme_io_md": false, 00:36:14.940 "write_zeroes": true, 00:36:14.940 "zcopy": true, 00:36:14.940 "get_zone_info": false, 00:36:14.940 "zone_management": false, 00:36:14.940 "zone_append": false, 00:36:14.940 "compare": false, 00:36:14.940 "compare_and_write": false, 
00:36:14.940 "abort": true, 00:36:14.940 "seek_hole": false, 00:36:14.940 "seek_data": false, 00:36:14.940 "copy": true, 00:36:14.940 "nvme_iov_md": false 00:36:14.940 }, 00:36:14.940 "memory_domains": [ 00:36:14.940 { 00:36:14.940 "dma_device_id": "system", 00:36:14.940 "dma_device_type": 1 00:36:14.940 }, 00:36:14.940 { 00:36:14.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:14.940 "dma_device_type": 2 00:36:14.940 } 00:36:15.214 ], 00:36:15.214 "driver_specific": {} 00:36:15.214 } 00:36:15.214 ] 00:36:15.214 17:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.214 17:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:36:15.214 17:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:36:15.214 17:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:36:15.214 17:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:36:15.214 17:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:15.214 17:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:15.214 17:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:15.214 17:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:15.214 17:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:15.214 17:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:15.214 17:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:15.214 17:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
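The dump above is the output of `rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000` inside the test's `waitforbdev` helper; the fields that matter to the subsequent `verify_raid_bdev_state` call are `state`, `raid_level`, `strip_size_kb`, and the `num_base_bdevs*` counters. As a minimal self-contained sketch (sample JSON abridged from this log, no SPDK target required; `python3` stands in for the test's `jq` filter):

```shell
# Sketch only: reproduces the kind of state check verify_raid_bdev_state performs,
# against an abridged sample of the bdev_raid_get_bdevs output captured in this log.
raid_dump='{"name":"Existed_Raid","state":"configuring","raid_level":"concat",
            "strip_size_kb":64,"num_base_bdevs":4,"num_base_bdevs_discovered":3}'
state=$(printf '%s' "$raid_dump" | python3 -c 'import json,sys; print(json.load(sys.stdin)["state"])')
# The test compares the extracted field against the expected state it was given.
[ "$state" = "configuring" ] && echo "state OK: $state"
```

In the real helper the JSON comes from a live RPC call and the comparison fails the test on mismatch; only the extraction-and-compare shape is shown here.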
00:36:15.214 17:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:15.214 17:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:15.214 17:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:15.214 17:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.214 17:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:15.214 17:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.214 17:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:15.214 "name": "Existed_Raid", 00:36:15.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:15.214 "strip_size_kb": 64, 00:36:15.214 "state": "configuring", 00:36:15.214 "raid_level": "concat", 00:36:15.214 "superblock": false, 00:36:15.214 "num_base_bdevs": 4, 00:36:15.214 "num_base_bdevs_discovered": 3, 00:36:15.214 "num_base_bdevs_operational": 4, 00:36:15.214 "base_bdevs_list": [ 00:36:15.214 { 00:36:15.214 "name": "BaseBdev1", 00:36:15.214 "uuid": "b0edddec-4a65-4d7f-a7f9-77d725a5d25b", 00:36:15.214 "is_configured": true, 00:36:15.214 "data_offset": 0, 00:36:15.214 "data_size": 65536 00:36:15.214 }, 00:36:15.214 { 00:36:15.214 "name": "BaseBdev2", 00:36:15.214 "uuid": "36b89356-adc9-48c1-8890-b269d1ca2798", 00:36:15.214 "is_configured": true, 00:36:15.214 "data_offset": 0, 00:36:15.214 "data_size": 65536 00:36:15.214 }, 00:36:15.214 { 00:36:15.214 "name": "BaseBdev3", 00:36:15.214 "uuid": "d70d8621-7f1b-49a9-8500-e7fe560d6715", 00:36:15.214 "is_configured": true, 00:36:15.214 "data_offset": 0, 00:36:15.214 "data_size": 65536 00:36:15.214 }, 00:36:15.214 { 00:36:15.214 "name": "BaseBdev4", 00:36:15.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:15.214 "is_configured": false, 
00:36:15.214 "data_offset": 0, 00:36:15.214 "data_size": 0 00:36:15.214 } 00:36:15.214 ] 00:36:15.214 }' 00:36:15.214 17:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:15.214 17:33:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:15.494 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:36:15.494 17:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.494 17:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:15.494 [2024-11-26 17:33:16.145101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:36:15.494 [2024-11-26 17:33:16.145158] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:36:15.494 [2024-11-26 17:33:16.145168] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:36:15.494 [2024-11-26 17:33:16.145458] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:36:15.494 [2024-11-26 17:33:16.145661] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:36:15.494 [2024-11-26 17:33:16.145693] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:36:15.494 [2024-11-26 17:33:16.145972] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:15.494 BaseBdev4 00:36:15.494 17:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.494 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:36:15.494 17:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:36:15.494 17:33:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:15.494 17:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:36:15.494 17:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:15.494 17:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:15.494 17:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:15.494 17:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.494 17:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:15.494 17:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.494 17:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:36:15.494 17:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.494 17:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:15.494 [ 00:36:15.494 { 00:36:15.494 "name": "BaseBdev4", 00:36:15.494 "aliases": [ 00:36:15.494 "6995626f-39a7-4bc8-a8f5-8ffc6c40c0df" 00:36:15.494 ], 00:36:15.494 "product_name": "Malloc disk", 00:36:15.494 "block_size": 512, 00:36:15.494 "num_blocks": 65536, 00:36:15.494 "uuid": "6995626f-39a7-4bc8-a8f5-8ffc6c40c0df", 00:36:15.494 "assigned_rate_limits": { 00:36:15.494 "rw_ios_per_sec": 0, 00:36:15.494 "rw_mbytes_per_sec": 0, 00:36:15.494 "r_mbytes_per_sec": 0, 00:36:15.494 "w_mbytes_per_sec": 0 00:36:15.494 }, 00:36:15.494 "claimed": true, 00:36:15.494 "claim_type": "exclusive_write", 00:36:15.494 "zoned": false, 00:36:15.494 "supported_io_types": { 00:36:15.494 "read": true, 00:36:15.494 "write": true, 00:36:15.494 "unmap": true, 00:36:15.494 "flush": true, 00:36:15.494 "reset": true, 00:36:15.494 
"nvme_admin": false, 00:36:15.494 "nvme_io": false, 00:36:15.494 "nvme_io_md": false, 00:36:15.494 "write_zeroes": true, 00:36:15.494 "zcopy": true, 00:36:15.494 "get_zone_info": false, 00:36:15.494 "zone_management": false, 00:36:15.494 "zone_append": false, 00:36:15.494 "compare": false, 00:36:15.494 "compare_and_write": false, 00:36:15.494 "abort": true, 00:36:15.494 "seek_hole": false, 00:36:15.494 "seek_data": false, 00:36:15.494 "copy": true, 00:36:15.494 "nvme_iov_md": false 00:36:15.494 }, 00:36:15.494 "memory_domains": [ 00:36:15.494 { 00:36:15.494 "dma_device_id": "system", 00:36:15.494 "dma_device_type": 1 00:36:15.494 }, 00:36:15.494 { 00:36:15.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:15.494 "dma_device_type": 2 00:36:15.494 } 00:36:15.494 ], 00:36:15.494 "driver_specific": {} 00:36:15.494 } 00:36:15.494 ] 00:36:15.494 17:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.494 17:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:36:15.494 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:36:15.494 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:36:15.494 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:36:15.494 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:15.494 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:15.494 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:15.494 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:15.494 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:15.494 
17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:15.494 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:15.494 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:15.494 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:15.754 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:15.754 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:15.754 17:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.754 17:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:15.754 17:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.754 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:15.754 "name": "Existed_Raid", 00:36:15.754 "uuid": "f5edea9e-6170-4f12-a34f-40d9b2da34e5", 00:36:15.754 "strip_size_kb": 64, 00:36:15.754 "state": "online", 00:36:15.754 "raid_level": "concat", 00:36:15.754 "superblock": false, 00:36:15.754 "num_base_bdevs": 4, 00:36:15.754 "num_base_bdevs_discovered": 4, 00:36:15.754 "num_base_bdevs_operational": 4, 00:36:15.754 "base_bdevs_list": [ 00:36:15.754 { 00:36:15.754 "name": "BaseBdev1", 00:36:15.754 "uuid": "b0edddec-4a65-4d7f-a7f9-77d725a5d25b", 00:36:15.754 "is_configured": true, 00:36:15.754 "data_offset": 0, 00:36:15.754 "data_size": 65536 00:36:15.754 }, 00:36:15.754 { 00:36:15.754 "name": "BaseBdev2", 00:36:15.754 "uuid": "36b89356-adc9-48c1-8890-b269d1ca2798", 00:36:15.754 "is_configured": true, 00:36:15.754 "data_offset": 0, 00:36:15.754 "data_size": 65536 00:36:15.754 }, 00:36:15.754 { 00:36:15.754 "name": "BaseBdev3", 
00:36:15.754 "uuid": "d70d8621-7f1b-49a9-8500-e7fe560d6715", 00:36:15.754 "is_configured": true, 00:36:15.754 "data_offset": 0, 00:36:15.754 "data_size": 65536 00:36:15.754 }, 00:36:15.754 { 00:36:15.754 "name": "BaseBdev4", 00:36:15.754 "uuid": "6995626f-39a7-4bc8-a8f5-8ffc6c40c0df", 00:36:15.754 "is_configured": true, 00:36:15.754 "data_offset": 0, 00:36:15.754 "data_size": 65536 00:36:15.754 } 00:36:15.754 ] 00:36:15.754 }' 00:36:15.754 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:15.754 17:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:16.013 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:36:16.013 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:36:16.013 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:36:16.013 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:36:16.013 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:36:16.013 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:36:16.013 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:36:16.013 17:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.013 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:36:16.013 17:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:16.013 [2024-11-26 17:33:16.640742] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:16.013 17:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.013 
17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:16.013 "name": "Existed_Raid", 00:36:16.013 "aliases": [ 00:36:16.013 "f5edea9e-6170-4f12-a34f-40d9b2da34e5" 00:36:16.013 ], 00:36:16.013 "product_name": "Raid Volume", 00:36:16.013 "block_size": 512, 00:36:16.013 "num_blocks": 262144, 00:36:16.013 "uuid": "f5edea9e-6170-4f12-a34f-40d9b2da34e5", 00:36:16.013 "assigned_rate_limits": { 00:36:16.013 "rw_ios_per_sec": 0, 00:36:16.013 "rw_mbytes_per_sec": 0, 00:36:16.013 "r_mbytes_per_sec": 0, 00:36:16.013 "w_mbytes_per_sec": 0 00:36:16.013 }, 00:36:16.013 "claimed": false, 00:36:16.013 "zoned": false, 00:36:16.013 "supported_io_types": { 00:36:16.013 "read": true, 00:36:16.013 "write": true, 00:36:16.013 "unmap": true, 00:36:16.013 "flush": true, 00:36:16.013 "reset": true, 00:36:16.013 "nvme_admin": false, 00:36:16.013 "nvme_io": false, 00:36:16.013 "nvme_io_md": false, 00:36:16.013 "write_zeroes": true, 00:36:16.013 "zcopy": false, 00:36:16.013 "get_zone_info": false, 00:36:16.013 "zone_management": false, 00:36:16.013 "zone_append": false, 00:36:16.013 "compare": false, 00:36:16.013 "compare_and_write": false, 00:36:16.013 "abort": false, 00:36:16.013 "seek_hole": false, 00:36:16.013 "seek_data": false, 00:36:16.013 "copy": false, 00:36:16.013 "nvme_iov_md": false 00:36:16.013 }, 00:36:16.013 "memory_domains": [ 00:36:16.013 { 00:36:16.013 "dma_device_id": "system", 00:36:16.013 "dma_device_type": 1 00:36:16.013 }, 00:36:16.013 { 00:36:16.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:16.013 "dma_device_type": 2 00:36:16.013 }, 00:36:16.013 { 00:36:16.013 "dma_device_id": "system", 00:36:16.013 "dma_device_type": 1 00:36:16.013 }, 00:36:16.013 { 00:36:16.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:16.013 "dma_device_type": 2 00:36:16.013 }, 00:36:16.013 { 00:36:16.013 "dma_device_id": "system", 00:36:16.013 "dma_device_type": 1 00:36:16.013 }, 00:36:16.013 { 00:36:16.013 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:36:16.013 "dma_device_type": 2 00:36:16.013 }, 00:36:16.013 { 00:36:16.013 "dma_device_id": "system", 00:36:16.013 "dma_device_type": 1 00:36:16.013 }, 00:36:16.013 { 00:36:16.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:16.013 "dma_device_type": 2 00:36:16.013 } 00:36:16.013 ], 00:36:16.013 "driver_specific": { 00:36:16.013 "raid": { 00:36:16.013 "uuid": "f5edea9e-6170-4f12-a34f-40d9b2da34e5", 00:36:16.013 "strip_size_kb": 64, 00:36:16.013 "state": "online", 00:36:16.013 "raid_level": "concat", 00:36:16.013 "superblock": false, 00:36:16.013 "num_base_bdevs": 4, 00:36:16.013 "num_base_bdevs_discovered": 4, 00:36:16.013 "num_base_bdevs_operational": 4, 00:36:16.013 "base_bdevs_list": [ 00:36:16.013 { 00:36:16.013 "name": "BaseBdev1", 00:36:16.013 "uuid": "b0edddec-4a65-4d7f-a7f9-77d725a5d25b", 00:36:16.013 "is_configured": true, 00:36:16.013 "data_offset": 0, 00:36:16.013 "data_size": 65536 00:36:16.013 }, 00:36:16.013 { 00:36:16.013 "name": "BaseBdev2", 00:36:16.013 "uuid": "36b89356-adc9-48c1-8890-b269d1ca2798", 00:36:16.013 "is_configured": true, 00:36:16.013 "data_offset": 0, 00:36:16.013 "data_size": 65536 00:36:16.013 }, 00:36:16.013 { 00:36:16.013 "name": "BaseBdev3", 00:36:16.013 "uuid": "d70d8621-7f1b-49a9-8500-e7fe560d6715", 00:36:16.013 "is_configured": true, 00:36:16.013 "data_offset": 0, 00:36:16.013 "data_size": 65536 00:36:16.013 }, 00:36:16.013 { 00:36:16.013 "name": "BaseBdev4", 00:36:16.013 "uuid": "6995626f-39a7-4bc8-a8f5-8ffc6c40c0df", 00:36:16.013 "is_configured": true, 00:36:16.013 "data_offset": 0, 00:36:16.013 "data_size": 65536 00:36:16.013 } 00:36:16.013 ] 00:36:16.013 } 00:36:16.013 } 00:36:16.013 }' 00:36:16.013 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:16.273 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:36:16.273 BaseBdev2 
00:36:16.273 BaseBdev3 00:36:16.273 BaseBdev4' 00:36:16.273 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:16.273 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:36:16.273 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:16.273 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:36:16.273 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:16.273 17:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.273 17:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:16.273 17:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.273 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:16.273 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:16.273 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:16.273 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:36:16.273 17:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.273 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:16.273 17:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:16.273 17:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.273 17:33:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:16.273 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:16.273 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:16.273 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:16.273 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:36:16.273 17:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.273 17:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:16.273 17:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.273 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:16.273 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:16.273 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:16.273 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:36:16.273 17:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.273 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:16.273 17:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:16.273 17:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.273 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:16.273 17:33:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:16.273 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:36:16.273 17:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.273 17:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:16.532 [2024-11-26 17:33:16.967895] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:16.532 [2024-11-26 17:33:16.967932] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:16.532 [2024-11-26 17:33:16.967991] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:16.532 17:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.532 17:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:36:16.532 17:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:36:16.532 17:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:36:16.532 17:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:36:16.532 17:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:36:16.532 17:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:36:16.532 17:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:16.532 17:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:36:16.532 17:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:16.532 17:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:36:16.532 17:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:16.532 17:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:16.532 17:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:16.532 17:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:16.532 17:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:16.532 17:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:16.532 17:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.532 17:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:16.532 17:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:16.532 17:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.532 17:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:16.532 "name": "Existed_Raid", 00:36:16.532 "uuid": "f5edea9e-6170-4f12-a34f-40d9b2da34e5", 00:36:16.532 "strip_size_kb": 64, 00:36:16.532 "state": "offline", 00:36:16.532 "raid_level": "concat", 00:36:16.532 "superblock": false, 00:36:16.532 "num_base_bdevs": 4, 00:36:16.532 "num_base_bdevs_discovered": 3, 00:36:16.532 "num_base_bdevs_operational": 3, 00:36:16.532 "base_bdevs_list": [ 00:36:16.532 { 00:36:16.532 "name": null, 00:36:16.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:16.532 "is_configured": false, 00:36:16.532 "data_offset": 0, 00:36:16.532 "data_size": 65536 00:36:16.532 }, 00:36:16.532 { 00:36:16.532 "name": "BaseBdev2", 00:36:16.532 "uuid": "36b89356-adc9-48c1-8890-b269d1ca2798", 00:36:16.532 "is_configured": 
true, 00:36:16.532 "data_offset": 0, 00:36:16.532 "data_size": 65536 00:36:16.532 }, 00:36:16.532 { 00:36:16.532 "name": "BaseBdev3", 00:36:16.532 "uuid": "d70d8621-7f1b-49a9-8500-e7fe560d6715", 00:36:16.532 "is_configured": true, 00:36:16.532 "data_offset": 0, 00:36:16.532 "data_size": 65536 00:36:16.532 }, 00:36:16.532 { 00:36:16.532 "name": "BaseBdev4", 00:36:16.532 "uuid": "6995626f-39a7-4bc8-a8f5-8ffc6c40c0df", 00:36:16.532 "is_configured": true, 00:36:16.532 "data_offset": 0, 00:36:16.532 "data_size": 65536 00:36:16.532 } 00:36:16.532 ] 00:36:16.532 }' 00:36:16.532 17:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:16.532 17:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:17.099 17:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:36:17.099 17:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:36:17.099 17:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:17.099 17:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.099 17:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:17.099 17:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:36:17.099 17:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.099 17:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:36:17.099 17:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:17.099 17:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:36:17.099 17:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
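After `bdev_malloc_delete BaseBdev1`, the trace shows `has_redundancy concat` entering a `case` statement and returning 1, which is why the test sets `expected_state=offline`. A sketch of that helper's observable behavior (assumption: the redundant levels are modeled as `raid1`/`raid5f`; only the `concat → return 1` branch is actually confirmed by this log):

```shell
# Hypothetical reconstruction of the has_redundancy helper seen in this trace
# (bdev_raid.sh@198-200). Only the concat branch is confirmed by the log.
has_redundancy() {
    case $1 in
        raid1 | raid5f) return 0 ;;  # losing one base bdev keeps the array online
        *) return 1 ;;               # raid0/concat: any loss takes the array offline
    esac
}
if has_redundancy concat; then expected_state=online; else expected_state=offline; fi
echo "$expected_state"  # concat has no redundancy, so the test expects "offline"
```

This is the branch that makes the subsequent `verify_raid_bdev_state Existed_Raid offline concat 64 3` check meaningful: with one of four base bdevs gone and no redundancy, the array must leave the online state.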
00:36:17.099 17:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:17.099 [2024-11-26 17:33:17.604580] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:36:17.100 17:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.100 17:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:36:17.100 17:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:36:17.100 17:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:17.100 17:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:36:17.100 17:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.100 17:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:17.100 17:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.100 17:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:36:17.100 17:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:17.100 17:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:36:17.100 17:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.100 17:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:17.100 [2024-11-26 17:33:17.764062] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:36:17.358 17:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.358 17:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:36:17.358 17:33:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:36:17.358 17:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:17.358 17:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:36:17.358 17:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.358 17:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:17.358 17:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.358 17:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:36:17.358 17:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:17.358 17:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:36:17.358 17:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.358 17:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:17.358 [2024-11-26 17:33:17.926187] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:36:17.358 [2024-11-26 17:33:17.926243] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:36:17.358 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.358 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:36:17.358 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:36:17.358 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:17.358 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:36:17.358 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.358 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:17.358 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:17.618 BaseBdev2 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:17.618 [ 00:36:17.618 { 00:36:17.618 "name": "BaseBdev2", 00:36:17.618 "aliases": [ 00:36:17.618 "fcc36d82-a9dd-487d-aad1-8b907983c58d" 00:36:17.618 ], 00:36:17.618 "product_name": "Malloc disk", 00:36:17.618 "block_size": 512, 00:36:17.618 "num_blocks": 65536, 00:36:17.618 "uuid": "fcc36d82-a9dd-487d-aad1-8b907983c58d", 00:36:17.618 "assigned_rate_limits": { 00:36:17.618 "rw_ios_per_sec": 0, 00:36:17.618 "rw_mbytes_per_sec": 0, 00:36:17.618 "r_mbytes_per_sec": 0, 00:36:17.618 "w_mbytes_per_sec": 0 00:36:17.618 }, 00:36:17.618 "claimed": false, 00:36:17.618 "zoned": false, 00:36:17.618 "supported_io_types": { 00:36:17.618 "read": true, 00:36:17.618 "write": true, 00:36:17.618 "unmap": true, 00:36:17.618 "flush": true, 00:36:17.618 "reset": true, 00:36:17.618 "nvme_admin": false, 00:36:17.618 "nvme_io": false, 00:36:17.618 "nvme_io_md": false, 00:36:17.618 "write_zeroes": true, 00:36:17.618 "zcopy": true, 00:36:17.618 "get_zone_info": false, 00:36:17.618 "zone_management": false, 00:36:17.618 "zone_append": false, 00:36:17.618 "compare": false, 00:36:17.618 "compare_and_write": false, 00:36:17.618 "abort": true, 00:36:17.618 "seek_hole": false, 00:36:17.618 
"seek_data": false, 00:36:17.618 "copy": true, 00:36:17.618 "nvme_iov_md": false 00:36:17.618 }, 00:36:17.618 "memory_domains": [ 00:36:17.618 { 00:36:17.618 "dma_device_id": "system", 00:36:17.618 "dma_device_type": 1 00:36:17.618 }, 00:36:17.618 { 00:36:17.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:17.618 "dma_device_type": 2 00:36:17.618 } 00:36:17.618 ], 00:36:17.618 "driver_specific": {} 00:36:17.618 } 00:36:17.618 ] 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:17.618 BaseBdev3 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:17.618 [ 00:36:17.618 { 00:36:17.618 "name": "BaseBdev3", 00:36:17.618 "aliases": [ 00:36:17.618 "c99c7aaa-99b1-4860-83de-197e04ad237a" 00:36:17.618 ], 00:36:17.618 "product_name": "Malloc disk", 00:36:17.618 "block_size": 512, 00:36:17.618 "num_blocks": 65536, 00:36:17.618 "uuid": "c99c7aaa-99b1-4860-83de-197e04ad237a", 00:36:17.618 "assigned_rate_limits": { 00:36:17.618 "rw_ios_per_sec": 0, 00:36:17.618 "rw_mbytes_per_sec": 0, 00:36:17.618 "r_mbytes_per_sec": 0, 00:36:17.618 "w_mbytes_per_sec": 0 00:36:17.618 }, 00:36:17.618 "claimed": false, 00:36:17.618 "zoned": false, 00:36:17.618 "supported_io_types": { 00:36:17.618 "read": true, 00:36:17.618 "write": true, 00:36:17.618 "unmap": true, 00:36:17.618 "flush": true, 00:36:17.618 "reset": true, 00:36:17.618 "nvme_admin": false, 00:36:17.618 "nvme_io": false, 00:36:17.618 "nvme_io_md": false, 00:36:17.618 "write_zeroes": true, 00:36:17.618 "zcopy": true, 00:36:17.618 "get_zone_info": false, 00:36:17.618 "zone_management": false, 00:36:17.618 "zone_append": false, 00:36:17.618 "compare": false, 00:36:17.618 "compare_and_write": false, 00:36:17.618 "abort": true, 00:36:17.618 "seek_hole": false, 00:36:17.618 "seek_data": false, 
00:36:17.618 "copy": true, 00:36:17.618 "nvme_iov_md": false 00:36:17.618 }, 00:36:17.618 "memory_domains": [ 00:36:17.618 { 00:36:17.618 "dma_device_id": "system", 00:36:17.618 "dma_device_type": 1 00:36:17.618 }, 00:36:17.618 { 00:36:17.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:17.618 "dma_device_type": 2 00:36:17.618 } 00:36:17.618 ], 00:36:17.618 "driver_specific": {} 00:36:17.618 } 00:36:17.618 ] 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:17.618 BaseBdev4 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:17.618 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:17.618 
17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:17.877 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.877 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:17.877 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.877 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:36:17.877 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.877 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:17.877 [ 00:36:17.877 { 00:36:17.877 "name": "BaseBdev4", 00:36:17.877 "aliases": [ 00:36:17.877 "0cec955b-f497-4e3a-8b25-26fb6974d8f5" 00:36:17.877 ], 00:36:17.877 "product_name": "Malloc disk", 00:36:17.877 "block_size": 512, 00:36:17.877 "num_blocks": 65536, 00:36:17.877 "uuid": "0cec955b-f497-4e3a-8b25-26fb6974d8f5", 00:36:17.877 "assigned_rate_limits": { 00:36:17.877 "rw_ios_per_sec": 0, 00:36:17.877 "rw_mbytes_per_sec": 0, 00:36:17.877 "r_mbytes_per_sec": 0, 00:36:17.877 "w_mbytes_per_sec": 0 00:36:17.877 }, 00:36:17.877 "claimed": false, 00:36:17.877 "zoned": false, 00:36:17.877 "supported_io_types": { 00:36:17.877 "read": true, 00:36:17.877 "write": true, 00:36:17.877 "unmap": true, 00:36:17.877 "flush": true, 00:36:17.877 "reset": true, 00:36:17.877 "nvme_admin": false, 00:36:17.877 "nvme_io": false, 00:36:17.877 "nvme_io_md": false, 00:36:17.877 "write_zeroes": true, 00:36:17.877 "zcopy": true, 00:36:17.877 "get_zone_info": false, 00:36:17.877 "zone_management": false, 00:36:17.877 "zone_append": false, 00:36:17.877 "compare": false, 00:36:17.877 "compare_and_write": false, 00:36:17.877 "abort": true, 00:36:17.877 "seek_hole": false, 00:36:17.877 "seek_data": false, 00:36:17.877 
"copy": true, 00:36:17.877 "nvme_iov_md": false 00:36:17.877 }, 00:36:17.877 "memory_domains": [ 00:36:17.877 { 00:36:17.877 "dma_device_id": "system", 00:36:17.877 "dma_device_type": 1 00:36:17.877 }, 00:36:17.877 { 00:36:17.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:17.877 "dma_device_type": 2 00:36:17.877 } 00:36:17.877 ], 00:36:17.877 "driver_specific": {} 00:36:17.877 } 00:36:17.877 ] 00:36:17.877 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.877 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:36:17.877 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:36:17.877 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:36:17.877 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:36:17.877 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.877 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:17.877 [2024-11-26 17:33:18.354119] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:17.877 [2024-11-26 17:33:18.354226] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:17.877 [2024-11-26 17:33:18.354282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:17.877 [2024-11-26 17:33:18.356395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:17.877 [2024-11-26 17:33:18.356514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:36:17.877 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.877 17:33:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:36:17.877 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:17.877 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:17.877 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:17.877 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:17.877 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:17.877 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:17.877 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:17.877 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:17.877 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:17.877 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:17.877 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:17.877 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.877 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:17.877 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.878 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:17.878 "name": "Existed_Raid", 00:36:17.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:17.878 "strip_size_kb": 64, 00:36:17.878 "state": "configuring", 00:36:17.878 
"raid_level": "concat", 00:36:17.878 "superblock": false, 00:36:17.878 "num_base_bdevs": 4, 00:36:17.878 "num_base_bdevs_discovered": 3, 00:36:17.878 "num_base_bdevs_operational": 4, 00:36:17.878 "base_bdevs_list": [ 00:36:17.878 { 00:36:17.878 "name": "BaseBdev1", 00:36:17.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:17.878 "is_configured": false, 00:36:17.878 "data_offset": 0, 00:36:17.878 "data_size": 0 00:36:17.878 }, 00:36:17.878 { 00:36:17.878 "name": "BaseBdev2", 00:36:17.878 "uuid": "fcc36d82-a9dd-487d-aad1-8b907983c58d", 00:36:17.878 "is_configured": true, 00:36:17.878 "data_offset": 0, 00:36:17.878 "data_size": 65536 00:36:17.878 }, 00:36:17.878 { 00:36:17.878 "name": "BaseBdev3", 00:36:17.878 "uuid": "c99c7aaa-99b1-4860-83de-197e04ad237a", 00:36:17.878 "is_configured": true, 00:36:17.878 "data_offset": 0, 00:36:17.878 "data_size": 65536 00:36:17.878 }, 00:36:17.878 { 00:36:17.878 "name": "BaseBdev4", 00:36:17.878 "uuid": "0cec955b-f497-4e3a-8b25-26fb6974d8f5", 00:36:17.878 "is_configured": true, 00:36:17.878 "data_offset": 0, 00:36:17.878 "data_size": 65536 00:36:17.878 } 00:36:17.878 ] 00:36:17.878 }' 00:36:17.878 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:17.878 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:18.137 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:36:18.137 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.137 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:18.137 [2024-11-26 17:33:18.825344] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:36:18.137 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.137 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:36:18.396 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:18.396 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:18.396 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:18.396 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:18.396 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:18.396 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:18.396 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:18.396 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:18.396 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:18.396 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:18.396 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:18.396 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.396 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:18.396 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.396 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:18.396 "name": "Existed_Raid", 00:36:18.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:18.396 "strip_size_kb": 64, 00:36:18.396 "state": "configuring", 00:36:18.396 "raid_level": "concat", 00:36:18.396 "superblock": false, 
00:36:18.396 "num_base_bdevs": 4, 00:36:18.396 "num_base_bdevs_discovered": 2, 00:36:18.396 "num_base_bdevs_operational": 4, 00:36:18.396 "base_bdevs_list": [ 00:36:18.396 { 00:36:18.396 "name": "BaseBdev1", 00:36:18.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:18.396 "is_configured": false, 00:36:18.396 "data_offset": 0, 00:36:18.396 "data_size": 0 00:36:18.396 }, 00:36:18.396 { 00:36:18.396 "name": null, 00:36:18.396 "uuid": "fcc36d82-a9dd-487d-aad1-8b907983c58d", 00:36:18.396 "is_configured": false, 00:36:18.396 "data_offset": 0, 00:36:18.396 "data_size": 65536 00:36:18.396 }, 00:36:18.396 { 00:36:18.396 "name": "BaseBdev3", 00:36:18.396 "uuid": "c99c7aaa-99b1-4860-83de-197e04ad237a", 00:36:18.397 "is_configured": true, 00:36:18.397 "data_offset": 0, 00:36:18.397 "data_size": 65536 00:36:18.397 }, 00:36:18.397 { 00:36:18.397 "name": "BaseBdev4", 00:36:18.397 "uuid": "0cec955b-f497-4e3a-8b25-26fb6974d8f5", 00:36:18.397 "is_configured": true, 00:36:18.397 "data_offset": 0, 00:36:18.397 "data_size": 65536 00:36:18.397 } 00:36:18.397 ] 00:36:18.397 }' 00:36:18.397 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:18.397 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:18.663 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:36:18.663 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:18.663 17:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.663 17:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:18.663 17:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.663 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:36:18.663 17:33:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:36:18.663 17:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.663 17:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:18.663 [2024-11-26 17:33:19.356389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:18.923 BaseBdev1 00:36:18.923 17:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.923 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:36:18.923 17:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:36:18.923 17:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:18.923 17:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:36:18.923 17:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:18.923 17:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:18.923 17:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:18.923 17:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.923 17:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:18.923 17:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.923 17:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:36:18.923 17:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.923 17:33:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:36:18.923 [ 00:36:18.923 { 00:36:18.923 "name": "BaseBdev1", 00:36:18.923 "aliases": [ 00:36:18.923 "a5980b75-dc06-454e-8eb8-ccf2956feb7a" 00:36:18.923 ], 00:36:18.923 "product_name": "Malloc disk", 00:36:18.923 "block_size": 512, 00:36:18.923 "num_blocks": 65536, 00:36:18.923 "uuid": "a5980b75-dc06-454e-8eb8-ccf2956feb7a", 00:36:18.923 "assigned_rate_limits": { 00:36:18.923 "rw_ios_per_sec": 0, 00:36:18.923 "rw_mbytes_per_sec": 0, 00:36:18.923 "r_mbytes_per_sec": 0, 00:36:18.923 "w_mbytes_per_sec": 0 00:36:18.923 }, 00:36:18.923 "claimed": true, 00:36:18.923 "claim_type": "exclusive_write", 00:36:18.923 "zoned": false, 00:36:18.923 "supported_io_types": { 00:36:18.923 "read": true, 00:36:18.923 "write": true, 00:36:18.923 "unmap": true, 00:36:18.923 "flush": true, 00:36:18.923 "reset": true, 00:36:18.923 "nvme_admin": false, 00:36:18.923 "nvme_io": false, 00:36:18.923 "nvme_io_md": false, 00:36:18.923 "write_zeroes": true, 00:36:18.923 "zcopy": true, 00:36:18.923 "get_zone_info": false, 00:36:18.923 "zone_management": false, 00:36:18.923 "zone_append": false, 00:36:18.923 "compare": false, 00:36:18.923 "compare_and_write": false, 00:36:18.923 "abort": true, 00:36:18.923 "seek_hole": false, 00:36:18.923 "seek_data": false, 00:36:18.923 "copy": true, 00:36:18.923 "nvme_iov_md": false 00:36:18.923 }, 00:36:18.923 "memory_domains": [ 00:36:18.923 { 00:36:18.923 "dma_device_id": "system", 00:36:18.923 "dma_device_type": 1 00:36:18.923 }, 00:36:18.923 { 00:36:18.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:18.923 "dma_device_type": 2 00:36:18.923 } 00:36:18.923 ], 00:36:18.923 "driver_specific": {} 00:36:18.923 } 00:36:18.923 ] 00:36:18.923 17:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.923 17:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:36:18.923 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:36:18.923 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:18.923 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:18.923 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:18.923 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:18.923 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:18.923 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:18.923 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:18.923 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:18.923 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:18.923 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:18.923 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:18.923 17:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.923 17:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:18.923 17:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.923 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:18.923 "name": "Existed_Raid", 00:36:18.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:18.923 "strip_size_kb": 64, 00:36:18.923 "state": "configuring", 00:36:18.923 "raid_level": "concat", 00:36:18.923 "superblock": false, 
00:36:18.923 "num_base_bdevs": 4, 00:36:18.923 "num_base_bdevs_discovered": 3, 00:36:18.923 "num_base_bdevs_operational": 4, 00:36:18.923 "base_bdevs_list": [ 00:36:18.923 { 00:36:18.923 "name": "BaseBdev1", 00:36:18.923 "uuid": "a5980b75-dc06-454e-8eb8-ccf2956feb7a", 00:36:18.923 "is_configured": true, 00:36:18.923 "data_offset": 0, 00:36:18.923 "data_size": 65536 00:36:18.923 }, 00:36:18.923 { 00:36:18.923 "name": null, 00:36:18.923 "uuid": "fcc36d82-a9dd-487d-aad1-8b907983c58d", 00:36:18.923 "is_configured": false, 00:36:18.923 "data_offset": 0, 00:36:18.923 "data_size": 65536 00:36:18.923 }, 00:36:18.923 { 00:36:18.923 "name": "BaseBdev3", 00:36:18.923 "uuid": "c99c7aaa-99b1-4860-83de-197e04ad237a", 00:36:18.923 "is_configured": true, 00:36:18.923 "data_offset": 0, 00:36:18.923 "data_size": 65536 00:36:18.923 }, 00:36:18.923 { 00:36:18.923 "name": "BaseBdev4", 00:36:18.923 "uuid": "0cec955b-f497-4e3a-8b25-26fb6974d8f5", 00:36:18.923 "is_configured": true, 00:36:18.923 "data_offset": 0, 00:36:18.923 "data_size": 65536 00:36:18.923 } 00:36:18.923 ] 00:36:18.923 }' 00:36:18.923 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:18.923 17:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:19.182 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:19.182 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:36:19.182 17:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.182 17:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:19.182 17:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.442 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:36:19.442 17:33:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:36:19.442 17:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.442 17:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:19.442 [2024-11-26 17:33:19.903900] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:36:19.442 17:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.442 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:36:19.442 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:19.442 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:19.442 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:19.442 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:19.442 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:19.442 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:19.442 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:19.442 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:19.442 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:19.442 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:19.442 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:19.442 17:33:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.442 17:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:19.442 17:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.442 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:19.442 "name": "Existed_Raid", 00:36:19.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:19.442 "strip_size_kb": 64, 00:36:19.442 "state": "configuring", 00:36:19.442 "raid_level": "concat", 00:36:19.442 "superblock": false, 00:36:19.442 "num_base_bdevs": 4, 00:36:19.442 "num_base_bdevs_discovered": 2, 00:36:19.442 "num_base_bdevs_operational": 4, 00:36:19.442 "base_bdevs_list": [ 00:36:19.442 { 00:36:19.442 "name": "BaseBdev1", 00:36:19.442 "uuid": "a5980b75-dc06-454e-8eb8-ccf2956feb7a", 00:36:19.442 "is_configured": true, 00:36:19.442 "data_offset": 0, 00:36:19.442 "data_size": 65536 00:36:19.442 }, 00:36:19.442 { 00:36:19.442 "name": null, 00:36:19.442 "uuid": "fcc36d82-a9dd-487d-aad1-8b907983c58d", 00:36:19.442 "is_configured": false, 00:36:19.442 "data_offset": 0, 00:36:19.442 "data_size": 65536 00:36:19.442 }, 00:36:19.442 { 00:36:19.442 "name": null, 00:36:19.442 "uuid": "c99c7aaa-99b1-4860-83de-197e04ad237a", 00:36:19.442 "is_configured": false, 00:36:19.442 "data_offset": 0, 00:36:19.442 "data_size": 65536 00:36:19.442 }, 00:36:19.442 { 00:36:19.442 "name": "BaseBdev4", 00:36:19.442 "uuid": "0cec955b-f497-4e3a-8b25-26fb6974d8f5", 00:36:19.442 "is_configured": true, 00:36:19.442 "data_offset": 0, 00:36:19.442 "data_size": 65536 00:36:19.442 } 00:36:19.442 ] 00:36:19.442 }' 00:36:19.442 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:19.442 17:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:19.701 17:33:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:19.701 17:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:36:19.701 17:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.701 17:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:19.961 17:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.961 17:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:36:19.961 17:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:36:19.961 17:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.961 17:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:19.961 [2024-11-26 17:33:20.431063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:19.961 17:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.961 17:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:36:19.961 17:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:19.961 17:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:19.961 17:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:19.961 17:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:19.961 17:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:19.961 17:33:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:19.961 17:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:19.961 17:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:19.961 17:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:19.961 17:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:19.961 17:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:19.961 17:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.961 17:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:19.961 17:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.961 17:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:19.961 "name": "Existed_Raid", 00:36:19.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:19.961 "strip_size_kb": 64, 00:36:19.961 "state": "configuring", 00:36:19.961 "raid_level": "concat", 00:36:19.961 "superblock": false, 00:36:19.961 "num_base_bdevs": 4, 00:36:19.961 "num_base_bdevs_discovered": 3, 00:36:19.961 "num_base_bdevs_operational": 4, 00:36:19.961 "base_bdevs_list": [ 00:36:19.961 { 00:36:19.961 "name": "BaseBdev1", 00:36:19.961 "uuid": "a5980b75-dc06-454e-8eb8-ccf2956feb7a", 00:36:19.961 "is_configured": true, 00:36:19.961 "data_offset": 0, 00:36:19.961 "data_size": 65536 00:36:19.961 }, 00:36:19.961 { 00:36:19.961 "name": null, 00:36:19.961 "uuid": "fcc36d82-a9dd-487d-aad1-8b907983c58d", 00:36:19.961 "is_configured": false, 00:36:19.961 "data_offset": 0, 00:36:19.961 "data_size": 65536 00:36:19.961 }, 00:36:19.961 { 00:36:19.961 "name": "BaseBdev3", 00:36:19.961 "uuid": 
"c99c7aaa-99b1-4860-83de-197e04ad237a", 00:36:19.961 "is_configured": true, 00:36:19.961 "data_offset": 0, 00:36:19.961 "data_size": 65536 00:36:19.961 }, 00:36:19.961 { 00:36:19.961 "name": "BaseBdev4", 00:36:19.961 "uuid": "0cec955b-f497-4e3a-8b25-26fb6974d8f5", 00:36:19.961 "is_configured": true, 00:36:19.961 "data_offset": 0, 00:36:19.961 "data_size": 65536 00:36:19.961 } 00:36:19.961 ] 00:36:19.961 }' 00:36:19.961 17:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:19.961 17:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:20.220 17:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:20.220 17:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.220 17:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:20.220 17:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:36:20.220 17:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.479 17:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:36:20.479 17:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:36:20.479 17:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.479 17:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:20.479 [2024-11-26 17:33:20.938355] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:20.479 17:33:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.479 17:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:36:20.479 17:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:20.479 17:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:20.479 17:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:20.479 17:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:20.479 17:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:20.479 17:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:20.479 17:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:20.479 17:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:20.479 17:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:20.479 17:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:20.479 17:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:20.479 17:33:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.479 17:33:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:20.479 17:33:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.479 17:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:20.479 "name": "Existed_Raid", 00:36:20.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:20.479 "strip_size_kb": 64, 00:36:20.479 "state": "configuring", 00:36:20.479 "raid_level": "concat", 00:36:20.479 "superblock": false, 00:36:20.479 "num_base_bdevs": 4, 00:36:20.479 
"num_base_bdevs_discovered": 2, 00:36:20.479 "num_base_bdevs_operational": 4, 00:36:20.479 "base_bdevs_list": [ 00:36:20.479 { 00:36:20.479 "name": null, 00:36:20.479 "uuid": "a5980b75-dc06-454e-8eb8-ccf2956feb7a", 00:36:20.479 "is_configured": false, 00:36:20.479 "data_offset": 0, 00:36:20.479 "data_size": 65536 00:36:20.479 }, 00:36:20.479 { 00:36:20.479 "name": null, 00:36:20.479 "uuid": "fcc36d82-a9dd-487d-aad1-8b907983c58d", 00:36:20.479 "is_configured": false, 00:36:20.479 "data_offset": 0, 00:36:20.479 "data_size": 65536 00:36:20.479 }, 00:36:20.479 { 00:36:20.479 "name": "BaseBdev3", 00:36:20.479 "uuid": "c99c7aaa-99b1-4860-83de-197e04ad237a", 00:36:20.479 "is_configured": true, 00:36:20.479 "data_offset": 0, 00:36:20.479 "data_size": 65536 00:36:20.479 }, 00:36:20.479 { 00:36:20.479 "name": "BaseBdev4", 00:36:20.479 "uuid": "0cec955b-f497-4e3a-8b25-26fb6974d8f5", 00:36:20.479 "is_configured": true, 00:36:20.479 "data_offset": 0, 00:36:20.479 "data_size": 65536 00:36:20.479 } 00:36:20.479 ] 00:36:20.479 }' 00:36:20.479 17:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:20.479 17:33:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:21.045 17:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:36:21.045 17:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:21.045 17:33:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.045 17:33:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:21.045 17:33:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.045 17:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:36:21.045 17:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:36:21.045 17:33:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.045 17:33:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:21.045 [2024-11-26 17:33:21.564738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:21.045 17:33:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.045 17:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:36:21.045 17:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:21.045 17:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:21.045 17:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:21.045 17:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:21.045 17:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:21.045 17:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:21.045 17:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:21.045 17:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:21.045 17:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:21.045 17:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:21.045 17:33:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.045 17:33:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:36:21.045 17:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:21.045 17:33:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.045 17:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:21.045 "name": "Existed_Raid", 00:36:21.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:21.045 "strip_size_kb": 64, 00:36:21.045 "state": "configuring", 00:36:21.045 "raid_level": "concat", 00:36:21.045 "superblock": false, 00:36:21.045 "num_base_bdevs": 4, 00:36:21.045 "num_base_bdevs_discovered": 3, 00:36:21.045 "num_base_bdevs_operational": 4, 00:36:21.045 "base_bdevs_list": [ 00:36:21.045 { 00:36:21.045 "name": null, 00:36:21.045 "uuid": "a5980b75-dc06-454e-8eb8-ccf2956feb7a", 00:36:21.045 "is_configured": false, 00:36:21.045 "data_offset": 0, 00:36:21.045 "data_size": 65536 00:36:21.045 }, 00:36:21.045 { 00:36:21.045 "name": "BaseBdev2", 00:36:21.045 "uuid": "fcc36d82-a9dd-487d-aad1-8b907983c58d", 00:36:21.045 "is_configured": true, 00:36:21.045 "data_offset": 0, 00:36:21.045 "data_size": 65536 00:36:21.045 }, 00:36:21.045 { 00:36:21.045 "name": "BaseBdev3", 00:36:21.045 "uuid": "c99c7aaa-99b1-4860-83de-197e04ad237a", 00:36:21.046 "is_configured": true, 00:36:21.046 "data_offset": 0, 00:36:21.046 "data_size": 65536 00:36:21.046 }, 00:36:21.046 { 00:36:21.046 "name": "BaseBdev4", 00:36:21.046 "uuid": "0cec955b-f497-4e3a-8b25-26fb6974d8f5", 00:36:21.046 "is_configured": true, 00:36:21.046 "data_offset": 0, 00:36:21.046 "data_size": 65536 00:36:21.046 } 00:36:21.046 ] 00:36:21.046 }' 00:36:21.046 17:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:21.046 17:33:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:21.614 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:36:21.614 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:21.614 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.614 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:21.614 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.614 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:36:21.614 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:21.614 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:36:21.614 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.614 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:21.614 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.614 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a5980b75-dc06-454e-8eb8-ccf2956feb7a 00:36:21.614 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.614 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:21.614 [2024-11-26 17:33:22.159688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:36:21.614 [2024-11-26 17:33:22.159854] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:36:21.614 [2024-11-26 17:33:22.159871] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:36:21.614 [2024-11-26 17:33:22.160226] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:36:21.614 [2024-11-26 17:33:22.160397] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:36:21.614 [2024-11-26 17:33:22.160413] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:36:21.614 [2024-11-26 17:33:22.160754] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:21.614 NewBaseBdev 00:36:21.614 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.614 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:36:21.614 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:36:21.614 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:21.614 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:36:21.614 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:21.614 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:21.614 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:21.614 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.614 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:21.614 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.614 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:36:21.614 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.614 17:33:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:21.614 [ 00:36:21.614 { 00:36:21.614 "name": "NewBaseBdev", 00:36:21.614 "aliases": [ 00:36:21.614 "a5980b75-dc06-454e-8eb8-ccf2956feb7a" 00:36:21.614 ], 00:36:21.614 "product_name": "Malloc disk", 00:36:21.614 "block_size": 512, 00:36:21.614 "num_blocks": 65536, 00:36:21.614 "uuid": "a5980b75-dc06-454e-8eb8-ccf2956feb7a", 00:36:21.614 "assigned_rate_limits": { 00:36:21.614 "rw_ios_per_sec": 0, 00:36:21.614 "rw_mbytes_per_sec": 0, 00:36:21.614 "r_mbytes_per_sec": 0, 00:36:21.614 "w_mbytes_per_sec": 0 00:36:21.614 }, 00:36:21.614 "claimed": true, 00:36:21.614 "claim_type": "exclusive_write", 00:36:21.614 "zoned": false, 00:36:21.614 "supported_io_types": { 00:36:21.614 "read": true, 00:36:21.614 "write": true, 00:36:21.614 "unmap": true, 00:36:21.614 "flush": true, 00:36:21.614 "reset": true, 00:36:21.614 "nvme_admin": false, 00:36:21.614 "nvme_io": false, 00:36:21.614 "nvme_io_md": false, 00:36:21.614 "write_zeroes": true, 00:36:21.614 "zcopy": true, 00:36:21.614 "get_zone_info": false, 00:36:21.614 "zone_management": false, 00:36:21.614 "zone_append": false, 00:36:21.614 "compare": false, 00:36:21.614 "compare_and_write": false, 00:36:21.614 "abort": true, 00:36:21.614 "seek_hole": false, 00:36:21.614 "seek_data": false, 00:36:21.614 "copy": true, 00:36:21.614 "nvme_iov_md": false 00:36:21.614 }, 00:36:21.614 "memory_domains": [ 00:36:21.614 { 00:36:21.614 "dma_device_id": "system", 00:36:21.614 "dma_device_type": 1 00:36:21.614 }, 00:36:21.614 { 00:36:21.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:21.614 "dma_device_type": 2 00:36:21.614 } 00:36:21.614 ], 00:36:21.614 "driver_specific": {} 00:36:21.614 } 00:36:21.614 ] 00:36:21.614 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.614 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:36:21.614 17:33:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:36:21.614 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:21.614 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:21.614 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:21.614 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:21.614 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:21.614 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:21.614 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:21.614 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:21.614 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:21.614 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:21.614 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:21.614 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.614 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:21.614 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.614 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:21.614 "name": "Existed_Raid", 00:36:21.614 "uuid": "5ea2daa1-497d-4058-b881-7714eb1303c8", 00:36:21.614 "strip_size_kb": 64, 00:36:21.614 "state": "online", 00:36:21.614 "raid_level": 
"concat", 00:36:21.614 "superblock": false, 00:36:21.614 "num_base_bdevs": 4, 00:36:21.614 "num_base_bdevs_discovered": 4, 00:36:21.614 "num_base_bdevs_operational": 4, 00:36:21.614 "base_bdevs_list": [ 00:36:21.614 { 00:36:21.614 "name": "NewBaseBdev", 00:36:21.614 "uuid": "a5980b75-dc06-454e-8eb8-ccf2956feb7a", 00:36:21.614 "is_configured": true, 00:36:21.614 "data_offset": 0, 00:36:21.614 "data_size": 65536 00:36:21.614 }, 00:36:21.614 { 00:36:21.614 "name": "BaseBdev2", 00:36:21.614 "uuid": "fcc36d82-a9dd-487d-aad1-8b907983c58d", 00:36:21.614 "is_configured": true, 00:36:21.614 "data_offset": 0, 00:36:21.614 "data_size": 65536 00:36:21.614 }, 00:36:21.614 { 00:36:21.614 "name": "BaseBdev3", 00:36:21.614 "uuid": "c99c7aaa-99b1-4860-83de-197e04ad237a", 00:36:21.614 "is_configured": true, 00:36:21.614 "data_offset": 0, 00:36:21.614 "data_size": 65536 00:36:21.614 }, 00:36:21.614 { 00:36:21.615 "name": "BaseBdev4", 00:36:21.615 "uuid": "0cec955b-f497-4e3a-8b25-26fb6974d8f5", 00:36:21.615 "is_configured": true, 00:36:21.615 "data_offset": 0, 00:36:21.615 "data_size": 65536 00:36:21.615 } 00:36:21.615 ] 00:36:21.615 }' 00:36:21.615 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:21.615 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:22.214 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:36:22.214 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:36:22.214 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:36:22.214 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:36:22.214 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:36:22.214 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local 
cmp_raid_bdev cmp_base_bdev 00:36:22.214 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:36:22.214 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:36:22.215 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.215 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:22.215 [2024-11-26 17:33:22.679400] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:22.215 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.215 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:22.215 "name": "Existed_Raid", 00:36:22.215 "aliases": [ 00:36:22.215 "5ea2daa1-497d-4058-b881-7714eb1303c8" 00:36:22.215 ], 00:36:22.215 "product_name": "Raid Volume", 00:36:22.215 "block_size": 512, 00:36:22.215 "num_blocks": 262144, 00:36:22.215 "uuid": "5ea2daa1-497d-4058-b881-7714eb1303c8", 00:36:22.215 "assigned_rate_limits": { 00:36:22.215 "rw_ios_per_sec": 0, 00:36:22.215 "rw_mbytes_per_sec": 0, 00:36:22.215 "r_mbytes_per_sec": 0, 00:36:22.215 "w_mbytes_per_sec": 0 00:36:22.215 }, 00:36:22.215 "claimed": false, 00:36:22.215 "zoned": false, 00:36:22.215 "supported_io_types": { 00:36:22.215 "read": true, 00:36:22.215 "write": true, 00:36:22.215 "unmap": true, 00:36:22.215 "flush": true, 00:36:22.215 "reset": true, 00:36:22.215 "nvme_admin": false, 00:36:22.215 "nvme_io": false, 00:36:22.215 "nvme_io_md": false, 00:36:22.215 "write_zeroes": true, 00:36:22.215 "zcopy": false, 00:36:22.215 "get_zone_info": false, 00:36:22.215 "zone_management": false, 00:36:22.215 "zone_append": false, 00:36:22.215 "compare": false, 00:36:22.215 "compare_and_write": false, 00:36:22.215 "abort": false, 00:36:22.215 "seek_hole": false, 00:36:22.215 "seek_data": false, 00:36:22.215 "copy": false, 
00:36:22.215 "nvme_iov_md": false 00:36:22.215 }, 00:36:22.215 "memory_domains": [ 00:36:22.215 { 00:36:22.215 "dma_device_id": "system", 00:36:22.215 "dma_device_type": 1 00:36:22.215 }, 00:36:22.215 { 00:36:22.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:22.215 "dma_device_type": 2 00:36:22.215 }, 00:36:22.215 { 00:36:22.215 "dma_device_id": "system", 00:36:22.215 "dma_device_type": 1 00:36:22.215 }, 00:36:22.215 { 00:36:22.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:22.215 "dma_device_type": 2 00:36:22.215 }, 00:36:22.215 { 00:36:22.215 "dma_device_id": "system", 00:36:22.215 "dma_device_type": 1 00:36:22.215 }, 00:36:22.215 { 00:36:22.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:22.215 "dma_device_type": 2 00:36:22.215 }, 00:36:22.215 { 00:36:22.215 "dma_device_id": "system", 00:36:22.215 "dma_device_type": 1 00:36:22.215 }, 00:36:22.215 { 00:36:22.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:22.215 "dma_device_type": 2 00:36:22.215 } 00:36:22.215 ], 00:36:22.215 "driver_specific": { 00:36:22.215 "raid": { 00:36:22.215 "uuid": "5ea2daa1-497d-4058-b881-7714eb1303c8", 00:36:22.215 "strip_size_kb": 64, 00:36:22.215 "state": "online", 00:36:22.215 "raid_level": "concat", 00:36:22.215 "superblock": false, 00:36:22.215 "num_base_bdevs": 4, 00:36:22.215 "num_base_bdevs_discovered": 4, 00:36:22.215 "num_base_bdevs_operational": 4, 00:36:22.215 "base_bdevs_list": [ 00:36:22.215 { 00:36:22.215 "name": "NewBaseBdev", 00:36:22.215 "uuid": "a5980b75-dc06-454e-8eb8-ccf2956feb7a", 00:36:22.215 "is_configured": true, 00:36:22.215 "data_offset": 0, 00:36:22.215 "data_size": 65536 00:36:22.215 }, 00:36:22.215 { 00:36:22.215 "name": "BaseBdev2", 00:36:22.215 "uuid": "fcc36d82-a9dd-487d-aad1-8b907983c58d", 00:36:22.215 "is_configured": true, 00:36:22.215 "data_offset": 0, 00:36:22.215 "data_size": 65536 00:36:22.215 }, 00:36:22.215 { 00:36:22.215 "name": "BaseBdev3", 00:36:22.215 "uuid": "c99c7aaa-99b1-4860-83de-197e04ad237a", 00:36:22.215 
"is_configured": true, 00:36:22.215 "data_offset": 0, 00:36:22.215 "data_size": 65536 00:36:22.215 }, 00:36:22.215 { 00:36:22.215 "name": "BaseBdev4", 00:36:22.215 "uuid": "0cec955b-f497-4e3a-8b25-26fb6974d8f5", 00:36:22.215 "is_configured": true, 00:36:22.215 "data_offset": 0, 00:36:22.215 "data_size": 65536 00:36:22.215 } 00:36:22.215 ] 00:36:22.215 } 00:36:22.215 } 00:36:22.215 }' 00:36:22.215 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:22.215 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:36:22.215 BaseBdev2 00:36:22.215 BaseBdev3 00:36:22.215 BaseBdev4' 00:36:22.215 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:22.215 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:36:22.215 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:22.215 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:36:22.215 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.215 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:22.215 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:22.215 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.215 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:22.215 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:22.215 17:33:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:22.215 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:36:22.215 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:22.215 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.215 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:22.215 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.215 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:22.215 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:22.215 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:22.215 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:36:22.215 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:22.215 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.215 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:22.216 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.474 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:22.474 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:22.474 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:22.474 17:33:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:36:22.474 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.474 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:22.474 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:22.474 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.474 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:22.474 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:22.474 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:36:22.474 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.474 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:22.474 [2024-11-26 17:33:22.942447] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:22.474 [2024-11-26 17:33:22.942560] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:22.474 [2024-11-26 17:33:22.942655] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:22.474 [2024-11-26 17:33:22.942735] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:22.474 [2024-11-26 17:33:22.942746] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:36:22.474 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.474 17:33:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 71554 00:36:22.474 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 71554 ']' 00:36:22.474 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71554 00:36:22.474 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:36:22.474 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:22.474 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71554 00:36:22.474 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:22.474 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:22.474 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71554' 00:36:22.474 killing process with pid 71554 00:36:22.474 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71554 00:36:22.474 [2024-11-26 17:33:22.975776] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:22.474 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71554 00:36:22.732 [2024-11-26 17:33:23.410433] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:24.111 17:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:36:24.111 00:36:24.111 real 0m12.283s 00:36:24.111 user 0m19.362s 00:36:24.111 sys 0m2.164s 00:36:24.111 17:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:24.111 17:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:24.111 ************************************ 00:36:24.111 END TEST raid_state_function_test 00:36:24.111 ************************************ 
00:36:24.111 17:33:24 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:36:24.111 17:33:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:36:24.111 17:33:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:24.111 17:33:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:24.370 ************************************ 00:36:24.370 START TEST raid_state_function_test_sb 00:36:24.370 ************************************ 00:36:24.370 17:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:36:24.370 17:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:36:24.370 17:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:36:24.370 17:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:36:24.370 17:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:36:24.370 17:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:36:24.370 17:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:24.370 17:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:36:24.370 17:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:36:24.370 17:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:24.370 17:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:36:24.370 17:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:36:24.370 17:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:24.370 
17:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:36:24.370 17:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:36:24.370 17:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:24.370 17:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:36:24.370 17:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:36:24.370 17:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:24.370 17:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:36:24.370 17:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:36:24.370 17:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:36:24.370 17:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:36:24.370 17:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:36:24.370 17:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:36:24.370 17:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:36:24.370 17:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:36:24.370 17:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:36:24.370 17:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:36:24.370 17:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:36:24.370 17:33:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@229 -- # raid_pid=72231 00:36:24.370 17:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72231' 00:36:24.370 Process raid pid: 72231 00:36:24.370 17:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:36:24.370 17:33:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72231 00:36:24.370 17:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 72231 ']' 00:36:24.370 17:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:24.370 17:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:24.370 17:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:24.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:24.370 17:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:24.370 17:33:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:24.370 [2024-11-26 17:33:24.914311] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:36:24.370 [2024-11-26 17:33:24.914493] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:24.629 [2024-11-26 17:33:25.078624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:24.629 [2024-11-26 17:33:25.208446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:24.888 [2024-11-26 17:33:25.436106] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:24.888 [2024-11-26 17:33:25.436253] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:25.148 17:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:25.148 17:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:36:25.148 17:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:36:25.148 17:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.148 17:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:25.148 [2024-11-26 17:33:25.791730] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:25.148 [2024-11-26 17:33:25.791781] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:25.148 [2024-11-26 17:33:25.791799] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:25.148 [2024-11-26 17:33:25.791810] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:25.148 [2024-11-26 17:33:25.791818] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:36:25.148 [2024-11-26 17:33:25.791828] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:25.148 [2024-11-26 17:33:25.791835] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:36:25.148 [2024-11-26 17:33:25.791845] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:36:25.148 17:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.148 17:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:36:25.148 17:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:25.148 17:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:25.148 17:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:25.148 17:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:25.148 17:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:25.148 17:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:25.148 17:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:25.148 17:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:25.148 17:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:25.148 17:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:25.148 17:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.148 17:33:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:25.148 17:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:25.148 17:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.407 17:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:25.407 "name": "Existed_Raid", 00:36:25.407 "uuid": "cc239a57-8fa0-475d-abc8-b7ebb006221a", 00:36:25.407 "strip_size_kb": 64, 00:36:25.407 "state": "configuring", 00:36:25.407 "raid_level": "concat", 00:36:25.407 "superblock": true, 00:36:25.407 "num_base_bdevs": 4, 00:36:25.407 "num_base_bdevs_discovered": 0, 00:36:25.407 "num_base_bdevs_operational": 4, 00:36:25.407 "base_bdevs_list": [ 00:36:25.407 { 00:36:25.407 "name": "BaseBdev1", 00:36:25.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:25.407 "is_configured": false, 00:36:25.407 "data_offset": 0, 00:36:25.407 "data_size": 0 00:36:25.407 }, 00:36:25.407 { 00:36:25.407 "name": "BaseBdev2", 00:36:25.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:25.407 "is_configured": false, 00:36:25.407 "data_offset": 0, 00:36:25.407 "data_size": 0 00:36:25.407 }, 00:36:25.407 { 00:36:25.407 "name": "BaseBdev3", 00:36:25.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:25.407 "is_configured": false, 00:36:25.407 "data_offset": 0, 00:36:25.407 "data_size": 0 00:36:25.407 }, 00:36:25.407 { 00:36:25.407 "name": "BaseBdev4", 00:36:25.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:25.407 "is_configured": false, 00:36:25.407 "data_offset": 0, 00:36:25.407 "data_size": 0 00:36:25.407 } 00:36:25.407 ] 00:36:25.407 }' 00:36:25.407 17:33:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:25.407 17:33:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:25.700 17:33:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:36:25.700 17:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.700 17:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:25.700 [2024-11-26 17:33:26.222964] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:25.700 [2024-11-26 17:33:26.223072] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:36:25.700 17:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.700 17:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:36:25.700 17:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.700 17:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:25.700 [2024-11-26 17:33:26.234952] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:25.700 [2024-11-26 17:33:26.235045] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:25.700 [2024-11-26 17:33:26.235082] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:25.700 [2024-11-26 17:33:26.235122] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:25.700 [2024-11-26 17:33:26.235160] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:25.700 [2024-11-26 17:33:26.235185] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:25.700 [2024-11-26 17:33:26.235206] bdev.c:8626:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:36:25.700 [2024-11-26 17:33:26.235286] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:36:25.700 17:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.700 17:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:36:25.700 17:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.700 17:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:25.700 [2024-11-26 17:33:26.288904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:25.700 BaseBdev1 00:36:25.700 17:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.700 17:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:36:25.700 17:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:36:25.700 17:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:25.700 17:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:36:25.700 17:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:25.700 17:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:25.700 17:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:25.700 17:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.700 17:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:25.700 17:33:26 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.700 17:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:36:25.700 17:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.700 17:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:25.700 [ 00:36:25.700 { 00:36:25.700 "name": "BaseBdev1", 00:36:25.700 "aliases": [ 00:36:25.700 "8b880740-590b-4ac8-98c1-5bdfc5426845" 00:36:25.700 ], 00:36:25.700 "product_name": "Malloc disk", 00:36:25.700 "block_size": 512, 00:36:25.700 "num_blocks": 65536, 00:36:25.700 "uuid": "8b880740-590b-4ac8-98c1-5bdfc5426845", 00:36:25.700 "assigned_rate_limits": { 00:36:25.700 "rw_ios_per_sec": 0, 00:36:25.700 "rw_mbytes_per_sec": 0, 00:36:25.700 "r_mbytes_per_sec": 0, 00:36:25.700 "w_mbytes_per_sec": 0 00:36:25.700 }, 00:36:25.700 "claimed": true, 00:36:25.700 "claim_type": "exclusive_write", 00:36:25.700 "zoned": false, 00:36:25.700 "supported_io_types": { 00:36:25.700 "read": true, 00:36:25.700 "write": true, 00:36:25.700 "unmap": true, 00:36:25.700 "flush": true, 00:36:25.700 "reset": true, 00:36:25.700 "nvme_admin": false, 00:36:25.700 "nvme_io": false, 00:36:25.700 "nvme_io_md": false, 00:36:25.700 "write_zeroes": true, 00:36:25.700 "zcopy": true, 00:36:25.700 "get_zone_info": false, 00:36:25.700 "zone_management": false, 00:36:25.700 "zone_append": false, 00:36:25.700 "compare": false, 00:36:25.700 "compare_and_write": false, 00:36:25.700 "abort": true, 00:36:25.700 "seek_hole": false, 00:36:25.700 "seek_data": false, 00:36:25.700 "copy": true, 00:36:25.700 "nvme_iov_md": false 00:36:25.700 }, 00:36:25.700 "memory_domains": [ 00:36:25.700 { 00:36:25.700 "dma_device_id": "system", 00:36:25.700 "dma_device_type": 1 00:36:25.700 }, 00:36:25.700 { 00:36:25.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:25.700 "dma_device_type": 2 00:36:25.700 } 
00:36:25.700 ], 00:36:25.700 "driver_specific": {} 00:36:25.700 } 00:36:25.700 ] 00:36:25.700 17:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.700 17:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:36:25.700 17:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:36:25.700 17:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:25.700 17:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:25.700 17:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:25.700 17:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:25.700 17:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:25.700 17:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:25.700 17:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:25.700 17:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:25.700 17:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:25.700 17:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:25.700 17:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.700 17:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:25.700 17:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:25.700 17:33:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.003 17:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:26.003 "name": "Existed_Raid", 00:36:26.003 "uuid": "a43de647-7cda-4192-a45b-e4049f4c576c", 00:36:26.003 "strip_size_kb": 64, 00:36:26.003 "state": "configuring", 00:36:26.003 "raid_level": "concat", 00:36:26.003 "superblock": true, 00:36:26.003 "num_base_bdevs": 4, 00:36:26.003 "num_base_bdevs_discovered": 1, 00:36:26.003 "num_base_bdevs_operational": 4, 00:36:26.003 "base_bdevs_list": [ 00:36:26.003 { 00:36:26.003 "name": "BaseBdev1", 00:36:26.003 "uuid": "8b880740-590b-4ac8-98c1-5bdfc5426845", 00:36:26.003 "is_configured": true, 00:36:26.003 "data_offset": 2048, 00:36:26.003 "data_size": 63488 00:36:26.003 }, 00:36:26.003 { 00:36:26.003 "name": "BaseBdev2", 00:36:26.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:26.003 "is_configured": false, 00:36:26.003 "data_offset": 0, 00:36:26.003 "data_size": 0 00:36:26.003 }, 00:36:26.003 { 00:36:26.003 "name": "BaseBdev3", 00:36:26.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:26.003 "is_configured": false, 00:36:26.003 "data_offset": 0, 00:36:26.003 "data_size": 0 00:36:26.003 }, 00:36:26.003 { 00:36:26.003 "name": "BaseBdev4", 00:36:26.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:26.003 "is_configured": false, 00:36:26.003 "data_offset": 0, 00:36:26.003 "data_size": 0 00:36:26.003 } 00:36:26.003 ] 00:36:26.003 }' 00:36:26.003 17:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:26.003 17:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:26.262 17:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:36:26.262 17:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.263 17:33:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:26.263 [2024-11-26 17:33:26.768197] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:26.263 [2024-11-26 17:33:26.768333] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:36:26.263 17:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.263 17:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:36:26.263 17:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.263 17:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:26.263 [2024-11-26 17:33:26.780282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:26.263 [2024-11-26 17:33:26.782477] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:26.263 [2024-11-26 17:33:26.782601] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:26.263 [2024-11-26 17:33:26.782644] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:26.263 [2024-11-26 17:33:26.782679] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:26.263 [2024-11-26 17:33:26.782706] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:36:26.263 [2024-11-26 17:33:26.782735] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:36:26.263 17:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.263 17:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:36:26.263 17:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:36:26.263 17:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:36:26.263 17:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:26.263 17:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:26.263 17:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:26.263 17:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:26.263 17:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:26.263 17:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:26.263 17:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:26.263 17:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:26.263 17:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:26.263 17:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:26.263 17:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:26.263 17:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.263 17:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:26.263 17:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.263 17:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:36:26.263 "name": "Existed_Raid", 00:36:26.263 "uuid": "9b1e1aa7-b1a9-4c56-800c-8d988a746261", 00:36:26.263 "strip_size_kb": 64, 00:36:26.263 "state": "configuring", 00:36:26.263 "raid_level": "concat", 00:36:26.263 "superblock": true, 00:36:26.263 "num_base_bdevs": 4, 00:36:26.263 "num_base_bdevs_discovered": 1, 00:36:26.263 "num_base_bdevs_operational": 4, 00:36:26.263 "base_bdevs_list": [ 00:36:26.263 { 00:36:26.263 "name": "BaseBdev1", 00:36:26.263 "uuid": "8b880740-590b-4ac8-98c1-5bdfc5426845", 00:36:26.263 "is_configured": true, 00:36:26.263 "data_offset": 2048, 00:36:26.263 "data_size": 63488 00:36:26.263 }, 00:36:26.263 { 00:36:26.263 "name": "BaseBdev2", 00:36:26.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:26.263 "is_configured": false, 00:36:26.263 "data_offset": 0, 00:36:26.263 "data_size": 0 00:36:26.263 }, 00:36:26.263 { 00:36:26.263 "name": "BaseBdev3", 00:36:26.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:26.263 "is_configured": false, 00:36:26.263 "data_offset": 0, 00:36:26.263 "data_size": 0 00:36:26.263 }, 00:36:26.263 { 00:36:26.263 "name": "BaseBdev4", 00:36:26.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:26.263 "is_configured": false, 00:36:26.263 "data_offset": 0, 00:36:26.263 "data_size": 0 00:36:26.263 } 00:36:26.263 ] 00:36:26.263 }' 00:36:26.263 17:33:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:26.263 17:33:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:26.832 17:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:36:26.832 17:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.832 17:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:26.832 [2024-11-26 17:33:27.284290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:36:26.832 BaseBdev2 00:36:26.832 17:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.832 17:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:36:26.832 17:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:36:26.832 17:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:26.832 17:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:36:26.832 17:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:26.832 17:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:26.832 17:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:26.832 17:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.832 17:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:26.832 17:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.832 17:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:36:26.832 17:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.832 17:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:26.832 [ 00:36:26.832 { 00:36:26.832 "name": "BaseBdev2", 00:36:26.832 "aliases": [ 00:36:26.832 "36464149-4f1a-43f7-a418-b0f8b5d473ec" 00:36:26.832 ], 00:36:26.832 "product_name": "Malloc disk", 00:36:26.832 "block_size": 512, 00:36:26.832 "num_blocks": 65536, 00:36:26.832 "uuid": "36464149-4f1a-43f7-a418-b0f8b5d473ec", 
00:36:26.832 "assigned_rate_limits": { 00:36:26.832 "rw_ios_per_sec": 0, 00:36:26.832 "rw_mbytes_per_sec": 0, 00:36:26.832 "r_mbytes_per_sec": 0, 00:36:26.832 "w_mbytes_per_sec": 0 00:36:26.832 }, 00:36:26.832 "claimed": true, 00:36:26.832 "claim_type": "exclusive_write", 00:36:26.832 "zoned": false, 00:36:26.832 "supported_io_types": { 00:36:26.832 "read": true, 00:36:26.832 "write": true, 00:36:26.832 "unmap": true, 00:36:26.832 "flush": true, 00:36:26.832 "reset": true, 00:36:26.832 "nvme_admin": false, 00:36:26.832 "nvme_io": false, 00:36:26.832 "nvme_io_md": false, 00:36:26.832 "write_zeroes": true, 00:36:26.832 "zcopy": true, 00:36:26.832 "get_zone_info": false, 00:36:26.832 "zone_management": false, 00:36:26.832 "zone_append": false, 00:36:26.832 "compare": false, 00:36:26.832 "compare_and_write": false, 00:36:26.832 "abort": true, 00:36:26.832 "seek_hole": false, 00:36:26.832 "seek_data": false, 00:36:26.832 "copy": true, 00:36:26.832 "nvme_iov_md": false 00:36:26.832 }, 00:36:26.832 "memory_domains": [ 00:36:26.832 { 00:36:26.832 "dma_device_id": "system", 00:36:26.832 "dma_device_type": 1 00:36:26.832 }, 00:36:26.832 { 00:36:26.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:26.832 "dma_device_type": 2 00:36:26.832 } 00:36:26.832 ], 00:36:26.832 "driver_specific": {} 00:36:26.832 } 00:36:26.832 ] 00:36:26.832 17:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.832 17:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:36:26.832 17:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:36:26.832 17:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:36:26.832 17:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:36:26.832 17:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:36:26.832 17:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:26.832 17:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:26.832 17:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:26.832 17:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:26.832 17:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:26.832 17:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:26.832 17:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:26.832 17:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:26.832 17:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:26.832 17:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.832 17:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:26.832 17:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:26.832 17:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.832 17:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:26.832 "name": "Existed_Raid", 00:36:26.833 "uuid": "9b1e1aa7-b1a9-4c56-800c-8d988a746261", 00:36:26.833 "strip_size_kb": 64, 00:36:26.833 "state": "configuring", 00:36:26.833 "raid_level": "concat", 00:36:26.833 "superblock": true, 00:36:26.833 "num_base_bdevs": 4, 00:36:26.833 "num_base_bdevs_discovered": 2, 00:36:26.833 
"num_base_bdevs_operational": 4, 00:36:26.833 "base_bdevs_list": [ 00:36:26.833 { 00:36:26.833 "name": "BaseBdev1", 00:36:26.833 "uuid": "8b880740-590b-4ac8-98c1-5bdfc5426845", 00:36:26.833 "is_configured": true, 00:36:26.833 "data_offset": 2048, 00:36:26.833 "data_size": 63488 00:36:26.833 }, 00:36:26.833 { 00:36:26.833 "name": "BaseBdev2", 00:36:26.833 "uuid": "36464149-4f1a-43f7-a418-b0f8b5d473ec", 00:36:26.833 "is_configured": true, 00:36:26.833 "data_offset": 2048, 00:36:26.833 "data_size": 63488 00:36:26.833 }, 00:36:26.833 { 00:36:26.833 "name": "BaseBdev3", 00:36:26.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:26.833 "is_configured": false, 00:36:26.833 "data_offset": 0, 00:36:26.833 "data_size": 0 00:36:26.833 }, 00:36:26.833 { 00:36:26.833 "name": "BaseBdev4", 00:36:26.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:26.833 "is_configured": false, 00:36:26.833 "data_offset": 0, 00:36:26.833 "data_size": 0 00:36:26.833 } 00:36:26.833 ] 00:36:26.833 }' 00:36:26.833 17:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:26.833 17:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:27.092 17:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:36:27.092 17:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.092 17:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:27.092 [2024-11-26 17:33:27.754419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:27.092 BaseBdev3 00:36:27.092 17:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.092 17:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:36:27.092 17:33:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:36:27.092 17:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:27.092 17:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:36:27.092 17:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:27.092 17:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:27.092 17:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:27.092 17:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.092 17:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:27.092 17:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.092 17:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:36:27.092 17:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.092 17:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:27.092 [ 00:36:27.092 { 00:36:27.092 "name": "BaseBdev3", 00:36:27.092 "aliases": [ 00:36:27.092 "a5fa246d-a21d-4718-a1e7-97a565973698" 00:36:27.092 ], 00:36:27.092 "product_name": "Malloc disk", 00:36:27.092 "block_size": 512, 00:36:27.092 "num_blocks": 65536, 00:36:27.092 "uuid": "a5fa246d-a21d-4718-a1e7-97a565973698", 00:36:27.092 "assigned_rate_limits": { 00:36:27.092 "rw_ios_per_sec": 0, 00:36:27.092 "rw_mbytes_per_sec": 0, 00:36:27.092 "r_mbytes_per_sec": 0, 00:36:27.092 "w_mbytes_per_sec": 0 00:36:27.092 }, 00:36:27.092 "claimed": true, 00:36:27.092 "claim_type": "exclusive_write", 00:36:27.092 "zoned": false, 00:36:27.092 "supported_io_types": { 
00:36:27.092 "read": true, 00:36:27.092 "write": true, 00:36:27.092 "unmap": true, 00:36:27.092 "flush": true, 00:36:27.092 "reset": true, 00:36:27.092 "nvme_admin": false, 00:36:27.092 "nvme_io": false, 00:36:27.092 "nvme_io_md": false, 00:36:27.092 "write_zeroes": true, 00:36:27.092 "zcopy": true, 00:36:27.092 "get_zone_info": false, 00:36:27.092 "zone_management": false, 00:36:27.092 "zone_append": false, 00:36:27.092 "compare": false, 00:36:27.092 "compare_and_write": false, 00:36:27.092 "abort": true, 00:36:27.092 "seek_hole": false, 00:36:27.092 "seek_data": false, 00:36:27.092 "copy": true, 00:36:27.092 "nvme_iov_md": false 00:36:27.092 }, 00:36:27.092 "memory_domains": [ 00:36:27.092 { 00:36:27.092 "dma_device_id": "system", 00:36:27.092 "dma_device_type": 1 00:36:27.092 }, 00:36:27.092 { 00:36:27.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:27.093 "dma_device_type": 2 00:36:27.093 } 00:36:27.093 ], 00:36:27.352 "driver_specific": {} 00:36:27.352 } 00:36:27.352 ] 00:36:27.352 17:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.352 17:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:36:27.352 17:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:36:27.352 17:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:36:27.352 17:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:36:27.352 17:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:27.352 17:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:27.352 17:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:27.352 17:33:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:27.352 17:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:27.352 17:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:27.352 17:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:27.352 17:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:27.352 17:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:27.352 17:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:27.352 17:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.352 17:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:27.352 17:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:27.352 17:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.352 17:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:27.352 "name": "Existed_Raid", 00:36:27.352 "uuid": "9b1e1aa7-b1a9-4c56-800c-8d988a746261", 00:36:27.352 "strip_size_kb": 64, 00:36:27.352 "state": "configuring", 00:36:27.352 "raid_level": "concat", 00:36:27.352 "superblock": true, 00:36:27.352 "num_base_bdevs": 4, 00:36:27.352 "num_base_bdevs_discovered": 3, 00:36:27.352 "num_base_bdevs_operational": 4, 00:36:27.352 "base_bdevs_list": [ 00:36:27.352 { 00:36:27.352 "name": "BaseBdev1", 00:36:27.352 "uuid": "8b880740-590b-4ac8-98c1-5bdfc5426845", 00:36:27.352 "is_configured": true, 00:36:27.352 "data_offset": 2048, 00:36:27.352 "data_size": 63488 00:36:27.352 }, 00:36:27.352 { 00:36:27.352 "name": "BaseBdev2", 00:36:27.352 
"uuid": "36464149-4f1a-43f7-a418-b0f8b5d473ec", 00:36:27.352 "is_configured": true, 00:36:27.352 "data_offset": 2048, 00:36:27.352 "data_size": 63488 00:36:27.352 }, 00:36:27.352 { 00:36:27.352 "name": "BaseBdev3", 00:36:27.352 "uuid": "a5fa246d-a21d-4718-a1e7-97a565973698", 00:36:27.352 "is_configured": true, 00:36:27.352 "data_offset": 2048, 00:36:27.352 "data_size": 63488 00:36:27.352 }, 00:36:27.352 { 00:36:27.352 "name": "BaseBdev4", 00:36:27.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:27.352 "is_configured": false, 00:36:27.352 "data_offset": 0, 00:36:27.352 "data_size": 0 00:36:27.352 } 00:36:27.352 ] 00:36:27.352 }' 00:36:27.352 17:33:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:27.352 17:33:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:27.611 17:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:36:27.611 17:33:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.611 17:33:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:27.871 [2024-11-26 17:33:28.305350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:36:27.871 [2024-11-26 17:33:28.305743] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:36:27.871 [2024-11-26 17:33:28.305805] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:36:27.871 [2024-11-26 17:33:28.306134] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:36:27.871 BaseBdev4 00:36:27.871 [2024-11-26 17:33:28.306347] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:36:27.871 [2024-11-26 17:33:28.306398] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:36:27.871 [2024-11-26 17:33:28.306608] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:27.871 17:33:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.871 17:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:36:27.871 17:33:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:36:27.871 17:33:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:27.871 17:33:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:36:27.871 17:33:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:27.871 17:33:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:27.871 17:33:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:27.871 17:33:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.871 17:33:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:27.871 17:33:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.871 17:33:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:36:27.871 17:33:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.871 17:33:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:27.871 [ 00:36:27.871 { 00:36:27.871 "name": "BaseBdev4", 00:36:27.871 "aliases": [ 00:36:27.871 "18d65344-5404-47f5-8985-c38abb52a5b4" 00:36:27.871 ], 00:36:27.871 "product_name": "Malloc disk", 00:36:27.871 "block_size": 512, 00:36:27.871 
"num_blocks": 65536, 00:36:27.871 "uuid": "18d65344-5404-47f5-8985-c38abb52a5b4", 00:36:27.871 "assigned_rate_limits": { 00:36:27.871 "rw_ios_per_sec": 0, 00:36:27.871 "rw_mbytes_per_sec": 0, 00:36:27.871 "r_mbytes_per_sec": 0, 00:36:27.871 "w_mbytes_per_sec": 0 00:36:27.871 }, 00:36:27.871 "claimed": true, 00:36:27.871 "claim_type": "exclusive_write", 00:36:27.871 "zoned": false, 00:36:27.871 "supported_io_types": { 00:36:27.871 "read": true, 00:36:27.871 "write": true, 00:36:27.871 "unmap": true, 00:36:27.871 "flush": true, 00:36:27.871 "reset": true, 00:36:27.871 "nvme_admin": false, 00:36:27.871 "nvme_io": false, 00:36:27.871 "nvme_io_md": false, 00:36:27.871 "write_zeroes": true, 00:36:27.871 "zcopy": true, 00:36:27.871 "get_zone_info": false, 00:36:27.871 "zone_management": false, 00:36:27.871 "zone_append": false, 00:36:27.871 "compare": false, 00:36:27.871 "compare_and_write": false, 00:36:27.871 "abort": true, 00:36:27.871 "seek_hole": false, 00:36:27.871 "seek_data": false, 00:36:27.871 "copy": true, 00:36:27.871 "nvme_iov_md": false 00:36:27.871 }, 00:36:27.871 "memory_domains": [ 00:36:27.871 { 00:36:27.871 "dma_device_id": "system", 00:36:27.871 "dma_device_type": 1 00:36:27.871 }, 00:36:27.871 { 00:36:27.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:27.871 "dma_device_type": 2 00:36:27.871 } 00:36:27.871 ], 00:36:27.871 "driver_specific": {} 00:36:27.871 } 00:36:27.872 ] 00:36:27.872 17:33:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.872 17:33:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:36:27.872 17:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:36:27.872 17:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:36:27.872 17:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:36:27.872 17:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:27.872 17:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:27.872 17:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:27.872 17:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:27.872 17:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:27.872 17:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:27.872 17:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:27.872 17:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:27.872 17:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:27.872 17:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:27.872 17:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:27.872 17:33:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.872 17:33:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:27.872 17:33:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.872 17:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:27.872 "name": "Existed_Raid", 00:36:27.872 "uuid": "9b1e1aa7-b1a9-4c56-800c-8d988a746261", 00:36:27.872 "strip_size_kb": 64, 00:36:27.872 "state": "online", 00:36:27.872 "raid_level": "concat", 00:36:27.872 "superblock": true, 00:36:27.872 "num_base_bdevs": 4, 
00:36:27.872 "num_base_bdevs_discovered": 4, 00:36:27.872 "num_base_bdevs_operational": 4, 00:36:27.872 "base_bdevs_list": [ 00:36:27.872 { 00:36:27.872 "name": "BaseBdev1", 00:36:27.872 "uuid": "8b880740-590b-4ac8-98c1-5bdfc5426845", 00:36:27.872 "is_configured": true, 00:36:27.872 "data_offset": 2048, 00:36:27.872 "data_size": 63488 00:36:27.872 }, 00:36:27.872 { 00:36:27.872 "name": "BaseBdev2", 00:36:27.872 "uuid": "36464149-4f1a-43f7-a418-b0f8b5d473ec", 00:36:27.872 "is_configured": true, 00:36:27.872 "data_offset": 2048, 00:36:27.872 "data_size": 63488 00:36:27.872 }, 00:36:27.872 { 00:36:27.872 "name": "BaseBdev3", 00:36:27.872 "uuid": "a5fa246d-a21d-4718-a1e7-97a565973698", 00:36:27.872 "is_configured": true, 00:36:27.872 "data_offset": 2048, 00:36:27.872 "data_size": 63488 00:36:27.872 }, 00:36:27.872 { 00:36:27.872 "name": "BaseBdev4", 00:36:27.872 "uuid": "18d65344-5404-47f5-8985-c38abb52a5b4", 00:36:27.872 "is_configured": true, 00:36:27.872 "data_offset": 2048, 00:36:27.872 "data_size": 63488 00:36:27.872 } 00:36:27.872 ] 00:36:27.872 }' 00:36:27.872 17:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:27.872 17:33:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:28.131 17:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:36:28.131 17:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:36:28.131 17:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:36:28.131 17:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:36:28.131 17:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:36:28.131 17:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:36:28.131 
17:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:36:28.131 17:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:36:28.131 17:33:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.131 17:33:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:28.131 [2024-11-26 17:33:28.781006] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:28.131 17:33:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.131 17:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:28.131 "name": "Existed_Raid", 00:36:28.131 "aliases": [ 00:36:28.131 "9b1e1aa7-b1a9-4c56-800c-8d988a746261" 00:36:28.131 ], 00:36:28.131 "product_name": "Raid Volume", 00:36:28.131 "block_size": 512, 00:36:28.131 "num_blocks": 253952, 00:36:28.131 "uuid": "9b1e1aa7-b1a9-4c56-800c-8d988a746261", 00:36:28.131 "assigned_rate_limits": { 00:36:28.131 "rw_ios_per_sec": 0, 00:36:28.131 "rw_mbytes_per_sec": 0, 00:36:28.131 "r_mbytes_per_sec": 0, 00:36:28.131 "w_mbytes_per_sec": 0 00:36:28.131 }, 00:36:28.131 "claimed": false, 00:36:28.131 "zoned": false, 00:36:28.131 "supported_io_types": { 00:36:28.131 "read": true, 00:36:28.131 "write": true, 00:36:28.131 "unmap": true, 00:36:28.131 "flush": true, 00:36:28.131 "reset": true, 00:36:28.131 "nvme_admin": false, 00:36:28.131 "nvme_io": false, 00:36:28.131 "nvme_io_md": false, 00:36:28.131 "write_zeroes": true, 00:36:28.131 "zcopy": false, 00:36:28.131 "get_zone_info": false, 00:36:28.131 "zone_management": false, 00:36:28.131 "zone_append": false, 00:36:28.131 "compare": false, 00:36:28.131 "compare_and_write": false, 00:36:28.131 "abort": false, 00:36:28.131 "seek_hole": false, 00:36:28.131 "seek_data": false, 00:36:28.131 "copy": false, 00:36:28.131 
"nvme_iov_md": false 00:36:28.131 }, 00:36:28.131 "memory_domains": [ 00:36:28.131 { 00:36:28.131 "dma_device_id": "system", 00:36:28.131 "dma_device_type": 1 00:36:28.131 }, 00:36:28.131 { 00:36:28.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:28.131 "dma_device_type": 2 00:36:28.131 }, 00:36:28.131 { 00:36:28.131 "dma_device_id": "system", 00:36:28.131 "dma_device_type": 1 00:36:28.131 }, 00:36:28.131 { 00:36:28.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:28.131 "dma_device_type": 2 00:36:28.131 }, 00:36:28.131 { 00:36:28.131 "dma_device_id": "system", 00:36:28.131 "dma_device_type": 1 00:36:28.131 }, 00:36:28.131 { 00:36:28.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:28.131 "dma_device_type": 2 00:36:28.131 }, 00:36:28.131 { 00:36:28.131 "dma_device_id": "system", 00:36:28.131 "dma_device_type": 1 00:36:28.131 }, 00:36:28.131 { 00:36:28.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:28.131 "dma_device_type": 2 00:36:28.131 } 00:36:28.131 ], 00:36:28.131 "driver_specific": { 00:36:28.131 "raid": { 00:36:28.131 "uuid": "9b1e1aa7-b1a9-4c56-800c-8d988a746261", 00:36:28.131 "strip_size_kb": 64, 00:36:28.131 "state": "online", 00:36:28.131 "raid_level": "concat", 00:36:28.131 "superblock": true, 00:36:28.131 "num_base_bdevs": 4, 00:36:28.131 "num_base_bdevs_discovered": 4, 00:36:28.131 "num_base_bdevs_operational": 4, 00:36:28.131 "base_bdevs_list": [ 00:36:28.131 { 00:36:28.131 "name": "BaseBdev1", 00:36:28.131 "uuid": "8b880740-590b-4ac8-98c1-5bdfc5426845", 00:36:28.131 "is_configured": true, 00:36:28.131 "data_offset": 2048, 00:36:28.131 "data_size": 63488 00:36:28.131 }, 00:36:28.132 { 00:36:28.132 "name": "BaseBdev2", 00:36:28.132 "uuid": "36464149-4f1a-43f7-a418-b0f8b5d473ec", 00:36:28.132 "is_configured": true, 00:36:28.132 "data_offset": 2048, 00:36:28.132 "data_size": 63488 00:36:28.132 }, 00:36:28.132 { 00:36:28.132 "name": "BaseBdev3", 00:36:28.132 "uuid": "a5fa246d-a21d-4718-a1e7-97a565973698", 00:36:28.132 "is_configured": true, 
00:36:28.132 "data_offset": 2048, 00:36:28.132 "data_size": 63488 00:36:28.132 }, 00:36:28.132 { 00:36:28.132 "name": "BaseBdev4", 00:36:28.132 "uuid": "18d65344-5404-47f5-8985-c38abb52a5b4", 00:36:28.132 "is_configured": true, 00:36:28.132 "data_offset": 2048, 00:36:28.132 "data_size": 63488 00:36:28.132 } 00:36:28.132 ] 00:36:28.132 } 00:36:28.132 } 00:36:28.132 }' 00:36:28.391 17:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:28.391 17:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:36:28.391 BaseBdev2 00:36:28.391 BaseBdev3 00:36:28.391 BaseBdev4' 00:36:28.391 17:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:28.391 17:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:36:28.391 17:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:28.391 17:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:36:28.392 17:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:28.392 17:33:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.392 17:33:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:28.392 17:33:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.392 17:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:28.392 17:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:28.392 17:33:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:28.392 17:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:28.392 17:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:36:28.392 17:33:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.392 17:33:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:28.392 17:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.392 17:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:28.392 17:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:28.392 17:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:28.392 17:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:28.392 17:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:36:28.392 17:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.392 17:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:28.392 17:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.392 17:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:28.392 17:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:28.392 17:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:36:28.392 17:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:28.392 17:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:36:28.392 17:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.392 17:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:28.651 17:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.651 17:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:28.651 17:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:28.651 17:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:36:28.651 17:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.651 17:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:28.651 [2024-11-26 17:33:29.116211] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:28.651 [2024-11-26 17:33:29.116247] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:28.651 [2024-11-26 17:33:29.116303] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:28.651 17:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.651 17:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:36:28.651 17:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:36:28.651 17:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:36:28.651 17:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:36:28.651 17:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:36:28.651 17:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:36:28.651 17:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:28.651 17:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:36:28.651 17:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:28.651 17:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:28.651 17:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:28.651 17:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:28.651 17:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:28.651 17:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:28.651 17:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:28.651 17:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:28.651 17:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:28.651 17:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.651 17:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:28.651 17:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:36:28.651 17:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:28.651 "name": "Existed_Raid", 00:36:28.651 "uuid": "9b1e1aa7-b1a9-4c56-800c-8d988a746261", 00:36:28.651 "strip_size_kb": 64, 00:36:28.651 "state": "offline", 00:36:28.651 "raid_level": "concat", 00:36:28.651 "superblock": true, 00:36:28.651 "num_base_bdevs": 4, 00:36:28.651 "num_base_bdevs_discovered": 3, 00:36:28.651 "num_base_bdevs_operational": 3, 00:36:28.651 "base_bdevs_list": [ 00:36:28.651 { 00:36:28.651 "name": null, 00:36:28.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:28.651 "is_configured": false, 00:36:28.651 "data_offset": 0, 00:36:28.651 "data_size": 63488 00:36:28.651 }, 00:36:28.651 { 00:36:28.651 "name": "BaseBdev2", 00:36:28.651 "uuid": "36464149-4f1a-43f7-a418-b0f8b5d473ec", 00:36:28.651 "is_configured": true, 00:36:28.651 "data_offset": 2048, 00:36:28.651 "data_size": 63488 00:36:28.651 }, 00:36:28.651 { 00:36:28.651 "name": "BaseBdev3", 00:36:28.651 "uuid": "a5fa246d-a21d-4718-a1e7-97a565973698", 00:36:28.651 "is_configured": true, 00:36:28.651 "data_offset": 2048, 00:36:28.651 "data_size": 63488 00:36:28.651 }, 00:36:28.651 { 00:36:28.651 "name": "BaseBdev4", 00:36:28.651 "uuid": "18d65344-5404-47f5-8985-c38abb52a5b4", 00:36:28.651 "is_configured": true, 00:36:28.651 "data_offset": 2048, 00:36:28.651 "data_size": 63488 00:36:28.651 } 00:36:28.651 ] 00:36:28.651 }' 00:36:28.651 17:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:28.651 17:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:29.219 17:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:36:29.219 17:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:36:29.219 17:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:36:29.219 17:33:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:29.219 17:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.219 17:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:29.219 17:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.219 17:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:36:29.219 17:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:29.219 17:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:36:29.219 17:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.219 17:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:29.219 [2024-11-26 17:33:29.693134] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:36:29.219 17:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.219 17:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:36:29.219 17:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:36:29.219 17:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:29.219 17:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:36:29.219 17:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.219 17:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:29.219 17:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:36:29.219 17:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:36:29.219 17:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:29.219 17:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:36:29.219 17:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.219 17:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:29.219 [2024-11-26 17:33:29.876125] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:36:29.481 17:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.481 17:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:36:29.481 17:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:36:29.481 17:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:29.481 17:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:36:29.481 17:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.481 17:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:29.481 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.481 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:36:29.481 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:29.481 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:36:29.481 17:33:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.481 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:29.481 [2024-11-26 17:33:30.052160] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:36:29.481 [2024-11-26 17:33:30.052316] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:36:29.481 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.481 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:36:29.481 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:36:29.481 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:29.481 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:36:29.481 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.481 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:29.481 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.741 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:36:29.741 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:36:29.741 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:36:29.741 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:36:29.741 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:36:29.741 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:36:29.741 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.741 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:29.741 BaseBdev2 00:36:29.741 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.741 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:36:29.741 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:36:29.741 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:29.741 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:36:29.741 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:29.741 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:29.741 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:29.741 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.741 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:29.741 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.741 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:36:29.741 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.741 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:29.741 [ 00:36:29.741 { 00:36:29.741 "name": "BaseBdev2", 00:36:29.741 "aliases": [ 00:36:29.741 
"8d56a6df-2dbc-4791-8f61-bb2081e61332" 00:36:29.741 ], 00:36:29.741 "product_name": "Malloc disk", 00:36:29.741 "block_size": 512, 00:36:29.741 "num_blocks": 65536, 00:36:29.741 "uuid": "8d56a6df-2dbc-4791-8f61-bb2081e61332", 00:36:29.741 "assigned_rate_limits": { 00:36:29.741 "rw_ios_per_sec": 0, 00:36:29.741 "rw_mbytes_per_sec": 0, 00:36:29.741 "r_mbytes_per_sec": 0, 00:36:29.741 "w_mbytes_per_sec": 0 00:36:29.741 }, 00:36:29.741 "claimed": false, 00:36:29.741 "zoned": false, 00:36:29.741 "supported_io_types": { 00:36:29.741 "read": true, 00:36:29.741 "write": true, 00:36:29.741 "unmap": true, 00:36:29.741 "flush": true, 00:36:29.741 "reset": true, 00:36:29.741 "nvme_admin": false, 00:36:29.741 "nvme_io": false, 00:36:29.741 "nvme_io_md": false, 00:36:29.741 "write_zeroes": true, 00:36:29.741 "zcopy": true, 00:36:29.741 "get_zone_info": false, 00:36:29.741 "zone_management": false, 00:36:29.741 "zone_append": false, 00:36:29.741 "compare": false, 00:36:29.741 "compare_and_write": false, 00:36:29.741 "abort": true, 00:36:29.741 "seek_hole": false, 00:36:29.741 "seek_data": false, 00:36:29.741 "copy": true, 00:36:29.741 "nvme_iov_md": false 00:36:29.741 }, 00:36:29.741 "memory_domains": [ 00:36:29.741 { 00:36:29.741 "dma_device_id": "system", 00:36:29.741 "dma_device_type": 1 00:36:29.741 }, 00:36:29.741 { 00:36:29.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:29.741 "dma_device_type": 2 00:36:29.741 } 00:36:29.741 ], 00:36:29.741 "driver_specific": {} 00:36:29.741 } 00:36:29.741 ] 00:36:29.741 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.741 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:36:29.741 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:36:29.741 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:36:29.741 17:33:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:36:29.741 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.741 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:29.741 BaseBdev3 00:36:29.741 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.741 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:36:29.741 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:36:29.741 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:29.742 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:36:29.742 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:29.742 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:29.742 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:29.742 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.742 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:29.742 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.742 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:36:29.742 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.742 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:29.742 [ 00:36:29.742 { 
00:36:29.742 "name": "BaseBdev3", 00:36:29.742 "aliases": [ 00:36:29.742 "3a47abbf-fde7-4226-b08f-dda55c9bd568" 00:36:29.742 ], 00:36:29.742 "product_name": "Malloc disk", 00:36:29.742 "block_size": 512, 00:36:29.742 "num_blocks": 65536, 00:36:29.742 "uuid": "3a47abbf-fde7-4226-b08f-dda55c9bd568", 00:36:29.742 "assigned_rate_limits": { 00:36:29.742 "rw_ios_per_sec": 0, 00:36:29.742 "rw_mbytes_per_sec": 0, 00:36:29.742 "r_mbytes_per_sec": 0, 00:36:29.742 "w_mbytes_per_sec": 0 00:36:29.742 }, 00:36:29.742 "claimed": false, 00:36:29.742 "zoned": false, 00:36:29.742 "supported_io_types": { 00:36:29.742 "read": true, 00:36:29.742 "write": true, 00:36:29.742 "unmap": true, 00:36:29.742 "flush": true, 00:36:29.742 "reset": true, 00:36:29.742 "nvme_admin": false, 00:36:29.742 "nvme_io": false, 00:36:29.742 "nvme_io_md": false, 00:36:29.742 "write_zeroes": true, 00:36:29.742 "zcopy": true, 00:36:29.742 "get_zone_info": false, 00:36:29.742 "zone_management": false, 00:36:29.742 "zone_append": false, 00:36:29.742 "compare": false, 00:36:29.742 "compare_and_write": false, 00:36:29.742 "abort": true, 00:36:29.742 "seek_hole": false, 00:36:29.742 "seek_data": false, 00:36:29.742 "copy": true, 00:36:29.742 "nvme_iov_md": false 00:36:29.742 }, 00:36:29.742 "memory_domains": [ 00:36:29.742 { 00:36:29.742 "dma_device_id": "system", 00:36:29.742 "dma_device_type": 1 00:36:29.742 }, 00:36:29.742 { 00:36:29.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:29.742 "dma_device_type": 2 00:36:29.742 } 00:36:29.742 ], 00:36:29.742 "driver_specific": {} 00:36:29.742 } 00:36:29.742 ] 00:36:29.742 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.742 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:36:29.742 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:36:29.742 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:36:29.742 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:36:29.742 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.742 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:29.742 BaseBdev4 00:36:29.742 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.742 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:36:29.742 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:36:29.742 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:29.742 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:36:29.742 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:29.742 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:29.742 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:29.742 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.742 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:30.001 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.001 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:36:30.001 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.001 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:36:30.001 [ 00:36:30.001 { 00:36:30.001 "name": "BaseBdev4", 00:36:30.001 "aliases": [ 00:36:30.001 "ee78f61e-f376-4904-a213-6e278b51434e" 00:36:30.001 ], 00:36:30.001 "product_name": "Malloc disk", 00:36:30.001 "block_size": 512, 00:36:30.001 "num_blocks": 65536, 00:36:30.001 "uuid": "ee78f61e-f376-4904-a213-6e278b51434e", 00:36:30.001 "assigned_rate_limits": { 00:36:30.001 "rw_ios_per_sec": 0, 00:36:30.001 "rw_mbytes_per_sec": 0, 00:36:30.001 "r_mbytes_per_sec": 0, 00:36:30.001 "w_mbytes_per_sec": 0 00:36:30.001 }, 00:36:30.001 "claimed": false, 00:36:30.001 "zoned": false, 00:36:30.001 "supported_io_types": { 00:36:30.001 "read": true, 00:36:30.001 "write": true, 00:36:30.001 "unmap": true, 00:36:30.001 "flush": true, 00:36:30.001 "reset": true, 00:36:30.001 "nvme_admin": false, 00:36:30.001 "nvme_io": false, 00:36:30.001 "nvme_io_md": false, 00:36:30.001 "write_zeroes": true, 00:36:30.001 "zcopy": true, 00:36:30.001 "get_zone_info": false, 00:36:30.001 "zone_management": false, 00:36:30.001 "zone_append": false, 00:36:30.001 "compare": false, 00:36:30.001 "compare_and_write": false, 00:36:30.001 "abort": true, 00:36:30.001 "seek_hole": false, 00:36:30.001 "seek_data": false, 00:36:30.001 "copy": true, 00:36:30.001 "nvme_iov_md": false 00:36:30.001 }, 00:36:30.001 "memory_domains": [ 00:36:30.001 { 00:36:30.001 "dma_device_id": "system", 00:36:30.001 "dma_device_type": 1 00:36:30.001 }, 00:36:30.001 { 00:36:30.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:30.001 "dma_device_type": 2 00:36:30.001 } 00:36:30.001 ], 00:36:30.001 "driver_specific": {} 00:36:30.001 } 00:36:30.001 ] 00:36:30.001 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.001 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:36:30.001 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:36:30.001 17:33:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:36:30.001 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:36:30.001 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.001 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:30.001 [2024-11-26 17:33:30.465928] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:30.001 [2024-11-26 17:33:30.466048] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:30.001 [2024-11-26 17:33:30.466116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:30.001 [2024-11-26 17:33:30.468331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:30.002 [2024-11-26 17:33:30.468443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:36:30.002 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.002 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:36:30.002 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:30.002 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:30.002 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:30.002 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:30.002 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:36:30.002 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:30.002 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:30.002 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:30.002 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:30.002 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:30.002 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:30.002 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.002 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:30.002 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.002 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:30.002 "name": "Existed_Raid", 00:36:30.002 "uuid": "c7d5f602-b6dd-4f90-9b3a-9a587c448365", 00:36:30.002 "strip_size_kb": 64, 00:36:30.002 "state": "configuring", 00:36:30.002 "raid_level": "concat", 00:36:30.002 "superblock": true, 00:36:30.002 "num_base_bdevs": 4, 00:36:30.002 "num_base_bdevs_discovered": 3, 00:36:30.002 "num_base_bdevs_operational": 4, 00:36:30.002 "base_bdevs_list": [ 00:36:30.002 { 00:36:30.002 "name": "BaseBdev1", 00:36:30.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:30.002 "is_configured": false, 00:36:30.002 "data_offset": 0, 00:36:30.002 "data_size": 0 00:36:30.002 }, 00:36:30.002 { 00:36:30.002 "name": "BaseBdev2", 00:36:30.002 "uuid": "8d56a6df-2dbc-4791-8f61-bb2081e61332", 00:36:30.002 "is_configured": true, 00:36:30.002 "data_offset": 2048, 00:36:30.002 "data_size": 63488 
00:36:30.002 }, 00:36:30.002 { 00:36:30.002 "name": "BaseBdev3", 00:36:30.002 "uuid": "3a47abbf-fde7-4226-b08f-dda55c9bd568", 00:36:30.002 "is_configured": true, 00:36:30.002 "data_offset": 2048, 00:36:30.002 "data_size": 63488 00:36:30.002 }, 00:36:30.002 { 00:36:30.002 "name": "BaseBdev4", 00:36:30.002 "uuid": "ee78f61e-f376-4904-a213-6e278b51434e", 00:36:30.002 "is_configured": true, 00:36:30.002 "data_offset": 2048, 00:36:30.002 "data_size": 63488 00:36:30.002 } 00:36:30.002 ] 00:36:30.002 }' 00:36:30.002 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:30.002 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:30.260 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:36:30.260 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.260 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:30.260 [2024-11-26 17:33:30.893222] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:36:30.260 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.260 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:36:30.260 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:30.260 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:30.260 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:30.260 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:30.260 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:36:30.260 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:30.260 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:30.260 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:30.260 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:30.260 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:30.260 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:30.260 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.260 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:30.260 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.260 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:30.260 "name": "Existed_Raid", 00:36:30.260 "uuid": "c7d5f602-b6dd-4f90-9b3a-9a587c448365", 00:36:30.260 "strip_size_kb": 64, 00:36:30.260 "state": "configuring", 00:36:30.260 "raid_level": "concat", 00:36:30.260 "superblock": true, 00:36:30.260 "num_base_bdevs": 4, 00:36:30.260 "num_base_bdevs_discovered": 2, 00:36:30.260 "num_base_bdevs_operational": 4, 00:36:30.260 "base_bdevs_list": [ 00:36:30.260 { 00:36:30.260 "name": "BaseBdev1", 00:36:30.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:30.260 "is_configured": false, 00:36:30.260 "data_offset": 0, 00:36:30.260 "data_size": 0 00:36:30.260 }, 00:36:30.260 { 00:36:30.260 "name": null, 00:36:30.260 "uuid": "8d56a6df-2dbc-4791-8f61-bb2081e61332", 00:36:30.260 "is_configured": false, 00:36:30.260 "data_offset": 0, 00:36:30.260 "data_size": 63488 
00:36:30.260 }, 00:36:30.260 { 00:36:30.260 "name": "BaseBdev3", 00:36:30.260 "uuid": "3a47abbf-fde7-4226-b08f-dda55c9bd568", 00:36:30.260 "is_configured": true, 00:36:30.260 "data_offset": 2048, 00:36:30.261 "data_size": 63488 00:36:30.261 }, 00:36:30.261 { 00:36:30.261 "name": "BaseBdev4", 00:36:30.261 "uuid": "ee78f61e-f376-4904-a213-6e278b51434e", 00:36:30.261 "is_configured": true, 00:36:30.261 "data_offset": 2048, 00:36:30.261 "data_size": 63488 00:36:30.261 } 00:36:30.261 ] 00:36:30.261 }' 00:36:30.261 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:30.520 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:30.779 17:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:30.779 17:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:36:30.779 17:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.779 17:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:30.779 17:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.779 17:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:36:30.779 17:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:36:30.779 17:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.779 17:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:30.779 [2024-11-26 17:33:31.430321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:30.779 BaseBdev1 00:36:30.779 17:33:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.779 17:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:36:30.779 17:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:36:30.779 17:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:30.779 17:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:36:30.779 17:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:30.779 17:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:30.779 17:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:30.779 17:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.779 17:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:30.779 17:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.779 17:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:36:30.779 17:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.779 17:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:30.779 [ 00:36:30.779 { 00:36:30.779 "name": "BaseBdev1", 00:36:30.779 "aliases": [ 00:36:30.779 "08e33813-951d-4392-9678-4bd4a04feca1" 00:36:30.779 ], 00:36:30.779 "product_name": "Malloc disk", 00:36:30.779 "block_size": 512, 00:36:30.779 "num_blocks": 65536, 00:36:30.779 "uuid": "08e33813-951d-4392-9678-4bd4a04feca1", 00:36:30.779 "assigned_rate_limits": { 00:36:30.779 "rw_ios_per_sec": 0, 00:36:30.779 "rw_mbytes_per_sec": 0, 
00:36:30.779 "r_mbytes_per_sec": 0, 00:36:30.779 "w_mbytes_per_sec": 0 00:36:30.779 }, 00:36:30.779 "claimed": true, 00:36:30.779 "claim_type": "exclusive_write", 00:36:30.779 "zoned": false, 00:36:30.779 "supported_io_types": { 00:36:30.779 "read": true, 00:36:30.779 "write": true, 00:36:30.779 "unmap": true, 00:36:30.779 "flush": true, 00:36:30.779 "reset": true, 00:36:30.779 "nvme_admin": false, 00:36:30.779 "nvme_io": false, 00:36:30.779 "nvme_io_md": false, 00:36:30.779 "write_zeroes": true, 00:36:30.779 "zcopy": true, 00:36:30.779 "get_zone_info": false, 00:36:30.779 "zone_management": false, 00:36:30.779 "zone_append": false, 00:36:30.779 "compare": false, 00:36:30.779 "compare_and_write": false, 00:36:30.779 "abort": true, 00:36:30.779 "seek_hole": false, 00:36:30.779 "seek_data": false, 00:36:30.779 "copy": true, 00:36:30.779 "nvme_iov_md": false 00:36:30.779 }, 00:36:30.779 "memory_domains": [ 00:36:30.779 { 00:36:30.779 "dma_device_id": "system", 00:36:30.779 "dma_device_type": 1 00:36:30.779 }, 00:36:30.779 { 00:36:30.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:30.779 "dma_device_type": 2 00:36:30.779 } 00:36:30.779 ], 00:36:30.779 "driver_specific": {} 00:36:30.779 } 00:36:30.779 ] 00:36:30.779 17:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.779 17:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:36:30.780 17:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:36:30.780 17:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:30.780 17:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:30.780 17:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:30.780 17:33:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:30.780 17:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:30.780 17:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:30.780 17:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:30.780 17:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:30.780 17:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:31.038 17:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:31.038 17:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:31.038 17:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.039 17:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:31.039 17:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.039 17:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:31.039 "name": "Existed_Raid", 00:36:31.039 "uuid": "c7d5f602-b6dd-4f90-9b3a-9a587c448365", 00:36:31.039 "strip_size_kb": 64, 00:36:31.039 "state": "configuring", 00:36:31.039 "raid_level": "concat", 00:36:31.039 "superblock": true, 00:36:31.039 "num_base_bdevs": 4, 00:36:31.039 "num_base_bdevs_discovered": 3, 00:36:31.039 "num_base_bdevs_operational": 4, 00:36:31.039 "base_bdevs_list": [ 00:36:31.039 { 00:36:31.039 "name": "BaseBdev1", 00:36:31.039 "uuid": "08e33813-951d-4392-9678-4bd4a04feca1", 00:36:31.039 "is_configured": true, 00:36:31.039 "data_offset": 2048, 00:36:31.039 "data_size": 63488 00:36:31.039 }, 00:36:31.039 { 
00:36:31.039 "name": null, 00:36:31.039 "uuid": "8d56a6df-2dbc-4791-8f61-bb2081e61332", 00:36:31.039 "is_configured": false, 00:36:31.039 "data_offset": 0, 00:36:31.039 "data_size": 63488 00:36:31.039 }, 00:36:31.039 { 00:36:31.039 "name": "BaseBdev3", 00:36:31.039 "uuid": "3a47abbf-fde7-4226-b08f-dda55c9bd568", 00:36:31.039 "is_configured": true, 00:36:31.039 "data_offset": 2048, 00:36:31.039 "data_size": 63488 00:36:31.039 }, 00:36:31.039 { 00:36:31.039 "name": "BaseBdev4", 00:36:31.039 "uuid": "ee78f61e-f376-4904-a213-6e278b51434e", 00:36:31.039 "is_configured": true, 00:36:31.039 "data_offset": 2048, 00:36:31.039 "data_size": 63488 00:36:31.039 } 00:36:31.039 ] 00:36:31.039 }' 00:36:31.039 17:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:31.039 17:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:31.298 17:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:31.298 17:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.298 17:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:31.298 17:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:36:31.298 17:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.298 17:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:36:31.298 17:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:36:31.298 17:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.298 17:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:31.298 [2024-11-26 17:33:31.953567] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:36:31.298 17:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.298 17:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:36:31.298 17:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:31.298 17:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:31.298 17:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:31.298 17:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:31.298 17:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:31.298 17:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:31.298 17:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:31.298 17:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:31.298 17:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:31.298 17:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:31.298 17:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:31.298 17:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.298 17:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:31.298 17:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.557 17:33:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:31.557 "name": "Existed_Raid", 00:36:31.557 "uuid": "c7d5f602-b6dd-4f90-9b3a-9a587c448365", 00:36:31.557 "strip_size_kb": 64, 00:36:31.557 "state": "configuring", 00:36:31.557 "raid_level": "concat", 00:36:31.557 "superblock": true, 00:36:31.557 "num_base_bdevs": 4, 00:36:31.557 "num_base_bdevs_discovered": 2, 00:36:31.557 "num_base_bdevs_operational": 4, 00:36:31.557 "base_bdevs_list": [ 00:36:31.557 { 00:36:31.557 "name": "BaseBdev1", 00:36:31.557 "uuid": "08e33813-951d-4392-9678-4bd4a04feca1", 00:36:31.557 "is_configured": true, 00:36:31.557 "data_offset": 2048, 00:36:31.557 "data_size": 63488 00:36:31.557 }, 00:36:31.557 { 00:36:31.557 "name": null, 00:36:31.557 "uuid": "8d56a6df-2dbc-4791-8f61-bb2081e61332", 00:36:31.557 "is_configured": false, 00:36:31.557 "data_offset": 0, 00:36:31.557 "data_size": 63488 00:36:31.557 }, 00:36:31.557 { 00:36:31.557 "name": null, 00:36:31.557 "uuid": "3a47abbf-fde7-4226-b08f-dda55c9bd568", 00:36:31.557 "is_configured": false, 00:36:31.557 "data_offset": 0, 00:36:31.557 "data_size": 63488 00:36:31.557 }, 00:36:31.557 { 00:36:31.557 "name": "BaseBdev4", 00:36:31.557 "uuid": "ee78f61e-f376-4904-a213-6e278b51434e", 00:36:31.557 "is_configured": true, 00:36:31.557 "data_offset": 2048, 00:36:31.557 "data_size": 63488 00:36:31.557 } 00:36:31.557 ] 00:36:31.557 }' 00:36:31.557 17:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:31.557 17:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:31.815 17:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:31.815 17:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:36:31.815 17:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.815 
17:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:31.815 17:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.815 17:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:36:31.815 17:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:36:31.815 17:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.815 17:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:31.815 [2024-11-26 17:33:32.476700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:31.815 17:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.815 17:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:36:31.815 17:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:31.815 17:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:31.815 17:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:31.815 17:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:31.815 17:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:31.815 17:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:31.815 17:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:31.815 17:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:36:31.815 17:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:31.815 17:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:31.815 17:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:31.815 17:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.815 17:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:32.076 17:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.076 17:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:32.076 "name": "Existed_Raid", 00:36:32.076 "uuid": "c7d5f602-b6dd-4f90-9b3a-9a587c448365", 00:36:32.076 "strip_size_kb": 64, 00:36:32.076 "state": "configuring", 00:36:32.076 "raid_level": "concat", 00:36:32.076 "superblock": true, 00:36:32.076 "num_base_bdevs": 4, 00:36:32.076 "num_base_bdevs_discovered": 3, 00:36:32.076 "num_base_bdevs_operational": 4, 00:36:32.076 "base_bdevs_list": [ 00:36:32.076 { 00:36:32.076 "name": "BaseBdev1", 00:36:32.076 "uuid": "08e33813-951d-4392-9678-4bd4a04feca1", 00:36:32.076 "is_configured": true, 00:36:32.076 "data_offset": 2048, 00:36:32.076 "data_size": 63488 00:36:32.076 }, 00:36:32.076 { 00:36:32.076 "name": null, 00:36:32.076 "uuid": "8d56a6df-2dbc-4791-8f61-bb2081e61332", 00:36:32.076 "is_configured": false, 00:36:32.076 "data_offset": 0, 00:36:32.076 "data_size": 63488 00:36:32.076 }, 00:36:32.076 { 00:36:32.076 "name": "BaseBdev3", 00:36:32.076 "uuid": "3a47abbf-fde7-4226-b08f-dda55c9bd568", 00:36:32.076 "is_configured": true, 00:36:32.076 "data_offset": 2048, 00:36:32.076 "data_size": 63488 00:36:32.076 }, 00:36:32.076 { 00:36:32.076 "name": "BaseBdev4", 00:36:32.076 "uuid": 
"ee78f61e-f376-4904-a213-6e278b51434e", 00:36:32.076 "is_configured": true, 00:36:32.076 "data_offset": 2048, 00:36:32.076 "data_size": 63488 00:36:32.076 } 00:36:32.076 ] 00:36:32.076 }' 00:36:32.076 17:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:32.076 17:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:32.337 17:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:32.337 17:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:36:32.337 17:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.337 17:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:32.337 17:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.337 17:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:36:32.337 17:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:36:32.337 17:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.337 17:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:32.337 [2024-11-26 17:33:32.967934] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:32.598 17:33:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.598 17:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:36:32.598 17:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:32.598 17:33:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:32.598 17:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:32.598 17:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:32.598 17:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:32.598 17:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:32.598 17:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:32.598 17:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:32.598 17:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:32.598 17:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:32.598 17:33:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.598 17:33:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:32.598 17:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:32.598 17:33:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.598 17:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:32.599 "name": "Existed_Raid", 00:36:32.599 "uuid": "c7d5f602-b6dd-4f90-9b3a-9a587c448365", 00:36:32.599 "strip_size_kb": 64, 00:36:32.599 "state": "configuring", 00:36:32.599 "raid_level": "concat", 00:36:32.599 "superblock": true, 00:36:32.599 "num_base_bdevs": 4, 00:36:32.599 "num_base_bdevs_discovered": 2, 00:36:32.599 "num_base_bdevs_operational": 4, 00:36:32.599 "base_bdevs_list": [ 00:36:32.599 { 00:36:32.599 "name": null, 00:36:32.599 
"uuid": "08e33813-951d-4392-9678-4bd4a04feca1", 00:36:32.599 "is_configured": false, 00:36:32.599 "data_offset": 0, 00:36:32.599 "data_size": 63488 00:36:32.599 }, 00:36:32.599 { 00:36:32.599 "name": null, 00:36:32.599 "uuid": "8d56a6df-2dbc-4791-8f61-bb2081e61332", 00:36:32.599 "is_configured": false, 00:36:32.599 "data_offset": 0, 00:36:32.599 "data_size": 63488 00:36:32.599 }, 00:36:32.599 { 00:36:32.599 "name": "BaseBdev3", 00:36:32.599 "uuid": "3a47abbf-fde7-4226-b08f-dda55c9bd568", 00:36:32.599 "is_configured": true, 00:36:32.599 "data_offset": 2048, 00:36:32.599 "data_size": 63488 00:36:32.599 }, 00:36:32.599 { 00:36:32.599 "name": "BaseBdev4", 00:36:32.599 "uuid": "ee78f61e-f376-4904-a213-6e278b51434e", 00:36:32.599 "is_configured": true, 00:36:32.599 "data_offset": 2048, 00:36:32.599 "data_size": 63488 00:36:32.599 } 00:36:32.599 ] 00:36:32.599 }' 00:36:32.599 17:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:32.599 17:33:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:32.857 17:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:32.857 17:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:36:32.858 17:33:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.858 17:33:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:32.858 17:33:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.117 17:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:36:33.117 17:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:36:33.117 17:33:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.117 17:33:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:33.117 [2024-11-26 17:33:33.565302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:33.117 17:33:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.117 17:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:36:33.117 17:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:33.117 17:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:33.117 17:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:33.117 17:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:33.117 17:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:33.117 17:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:33.117 17:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:33.117 17:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:33.117 17:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:33.117 17:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:33.117 17:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:33.117 17:33:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.117 17:33:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:33.117 17:33:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.117 17:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:33.117 "name": "Existed_Raid", 00:36:33.117 "uuid": "c7d5f602-b6dd-4f90-9b3a-9a587c448365", 00:36:33.117 "strip_size_kb": 64, 00:36:33.117 "state": "configuring", 00:36:33.117 "raid_level": "concat", 00:36:33.117 "superblock": true, 00:36:33.117 "num_base_bdevs": 4, 00:36:33.117 "num_base_bdevs_discovered": 3, 00:36:33.117 "num_base_bdevs_operational": 4, 00:36:33.117 "base_bdevs_list": [ 00:36:33.117 { 00:36:33.117 "name": null, 00:36:33.117 "uuid": "08e33813-951d-4392-9678-4bd4a04feca1", 00:36:33.117 "is_configured": false, 00:36:33.117 "data_offset": 0, 00:36:33.117 "data_size": 63488 00:36:33.117 }, 00:36:33.117 { 00:36:33.117 "name": "BaseBdev2", 00:36:33.117 "uuid": "8d56a6df-2dbc-4791-8f61-bb2081e61332", 00:36:33.117 "is_configured": true, 00:36:33.117 "data_offset": 2048, 00:36:33.117 "data_size": 63488 00:36:33.117 }, 00:36:33.117 { 00:36:33.117 "name": "BaseBdev3", 00:36:33.117 "uuid": "3a47abbf-fde7-4226-b08f-dda55c9bd568", 00:36:33.117 "is_configured": true, 00:36:33.117 "data_offset": 2048, 00:36:33.117 "data_size": 63488 00:36:33.117 }, 00:36:33.117 { 00:36:33.117 "name": "BaseBdev4", 00:36:33.117 "uuid": "ee78f61e-f376-4904-a213-6e278b51434e", 00:36:33.117 "is_configured": true, 00:36:33.117 "data_offset": 2048, 00:36:33.117 "data_size": 63488 00:36:33.117 } 00:36:33.117 ] 00:36:33.117 }' 00:36:33.117 17:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:33.117 17:33:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:33.375 17:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:33.375 17:33:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:36:33.375 17:33:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.375 17:33:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:33.375 17:33:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.375 17:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:36:33.375 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:33.375 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.375 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:33.375 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:36:33.375 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.375 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 08e33813-951d-4392-9678-4bd4a04feca1 00:36:33.375 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.375 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:33.633 [2024-11-26 17:33:34.091438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:36:33.633 [2024-11-26 17:33:34.091789] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:36:33.633 NewBaseBdev 00:36:33.633 [2024-11-26 17:33:34.091847] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:36:33.633 [2024-11-26 17:33:34.092156] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:36:33.633 [2024-11-26 17:33:34.092321] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:36:33.633 [2024-11-26 17:33:34.092335] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:36:33.633 [2024-11-26 17:33:34.092485] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:33.633 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.633 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:36:33.633 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:36:33.633 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:33.633 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:36:33.633 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:33.633 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:33.633 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:33.633 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.633 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:33.633 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.633 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:36:33.633 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.633 
17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:33.633 [ 00:36:33.633 { 00:36:33.633 "name": "NewBaseBdev", 00:36:33.633 "aliases": [ 00:36:33.633 "08e33813-951d-4392-9678-4bd4a04feca1" 00:36:33.633 ], 00:36:33.633 "product_name": "Malloc disk", 00:36:33.633 "block_size": 512, 00:36:33.633 "num_blocks": 65536, 00:36:33.633 "uuid": "08e33813-951d-4392-9678-4bd4a04feca1", 00:36:33.633 "assigned_rate_limits": { 00:36:33.633 "rw_ios_per_sec": 0, 00:36:33.633 "rw_mbytes_per_sec": 0, 00:36:33.633 "r_mbytes_per_sec": 0, 00:36:33.633 "w_mbytes_per_sec": 0 00:36:33.633 }, 00:36:33.633 "claimed": true, 00:36:33.633 "claim_type": "exclusive_write", 00:36:33.633 "zoned": false, 00:36:33.633 "supported_io_types": { 00:36:33.633 "read": true, 00:36:33.633 "write": true, 00:36:33.633 "unmap": true, 00:36:33.633 "flush": true, 00:36:33.633 "reset": true, 00:36:33.633 "nvme_admin": false, 00:36:33.633 "nvme_io": false, 00:36:33.633 "nvme_io_md": false, 00:36:33.633 "write_zeroes": true, 00:36:33.633 "zcopy": true, 00:36:33.633 "get_zone_info": false, 00:36:33.633 "zone_management": false, 00:36:33.633 "zone_append": false, 00:36:33.633 "compare": false, 00:36:33.633 "compare_and_write": false, 00:36:33.633 "abort": true, 00:36:33.633 "seek_hole": false, 00:36:33.633 "seek_data": false, 00:36:33.633 "copy": true, 00:36:33.633 "nvme_iov_md": false 00:36:33.633 }, 00:36:33.633 "memory_domains": [ 00:36:33.633 { 00:36:33.633 "dma_device_id": "system", 00:36:33.633 "dma_device_type": 1 00:36:33.633 }, 00:36:33.633 { 00:36:33.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:33.633 "dma_device_type": 2 00:36:33.633 } 00:36:33.633 ], 00:36:33.633 "driver_specific": {} 00:36:33.633 } 00:36:33.633 ] 00:36:33.633 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.633 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:36:33.633 17:33:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:36:33.633 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:33.633 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:33.633 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:33.633 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:33.633 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:33.633 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:33.633 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:33.633 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:33.633 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:33.633 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:33.633 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:33.633 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.633 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:33.633 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.633 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:33.633 "name": "Existed_Raid", 00:36:33.633 "uuid": "c7d5f602-b6dd-4f90-9b3a-9a587c448365", 00:36:33.633 "strip_size_kb": 64, 00:36:33.633 
"state": "online", 00:36:33.633 "raid_level": "concat", 00:36:33.633 "superblock": true, 00:36:33.633 "num_base_bdevs": 4, 00:36:33.633 "num_base_bdevs_discovered": 4, 00:36:33.633 "num_base_bdevs_operational": 4, 00:36:33.633 "base_bdevs_list": [ 00:36:33.633 { 00:36:33.633 "name": "NewBaseBdev", 00:36:33.633 "uuid": "08e33813-951d-4392-9678-4bd4a04feca1", 00:36:33.633 "is_configured": true, 00:36:33.633 "data_offset": 2048, 00:36:33.633 "data_size": 63488 00:36:33.633 }, 00:36:33.633 { 00:36:33.633 "name": "BaseBdev2", 00:36:33.633 "uuid": "8d56a6df-2dbc-4791-8f61-bb2081e61332", 00:36:33.633 "is_configured": true, 00:36:33.633 "data_offset": 2048, 00:36:33.633 "data_size": 63488 00:36:33.633 }, 00:36:33.633 { 00:36:33.633 "name": "BaseBdev3", 00:36:33.633 "uuid": "3a47abbf-fde7-4226-b08f-dda55c9bd568", 00:36:33.633 "is_configured": true, 00:36:33.633 "data_offset": 2048, 00:36:33.633 "data_size": 63488 00:36:33.633 }, 00:36:33.633 { 00:36:33.633 "name": "BaseBdev4", 00:36:33.633 "uuid": "ee78f61e-f376-4904-a213-6e278b51434e", 00:36:33.633 "is_configured": true, 00:36:33.633 "data_offset": 2048, 00:36:33.633 "data_size": 63488 00:36:33.633 } 00:36:33.633 ] 00:36:33.633 }' 00:36:33.633 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:33.633 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:34.216 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:36:34.217 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:36:34.217 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:36:34.217 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:36:34.217 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:36:34.217 
17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:36:34.217 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:36:34.217 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:36:34.217 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.217 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:34.217 [2024-11-26 17:33:34.623029] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:34.217 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.217 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:34.217 "name": "Existed_Raid", 00:36:34.217 "aliases": [ 00:36:34.217 "c7d5f602-b6dd-4f90-9b3a-9a587c448365" 00:36:34.217 ], 00:36:34.217 "product_name": "Raid Volume", 00:36:34.217 "block_size": 512, 00:36:34.217 "num_blocks": 253952, 00:36:34.217 "uuid": "c7d5f602-b6dd-4f90-9b3a-9a587c448365", 00:36:34.217 "assigned_rate_limits": { 00:36:34.217 "rw_ios_per_sec": 0, 00:36:34.217 "rw_mbytes_per_sec": 0, 00:36:34.217 "r_mbytes_per_sec": 0, 00:36:34.217 "w_mbytes_per_sec": 0 00:36:34.217 }, 00:36:34.217 "claimed": false, 00:36:34.217 "zoned": false, 00:36:34.217 "supported_io_types": { 00:36:34.217 "read": true, 00:36:34.217 "write": true, 00:36:34.217 "unmap": true, 00:36:34.217 "flush": true, 00:36:34.217 "reset": true, 00:36:34.217 "nvme_admin": false, 00:36:34.217 "nvme_io": false, 00:36:34.217 "nvme_io_md": false, 00:36:34.217 "write_zeroes": true, 00:36:34.217 "zcopy": false, 00:36:34.217 "get_zone_info": false, 00:36:34.217 "zone_management": false, 00:36:34.217 "zone_append": false, 00:36:34.217 "compare": false, 00:36:34.217 "compare_and_write": false, 00:36:34.217 "abort": 
false, 00:36:34.217 "seek_hole": false, 00:36:34.217 "seek_data": false, 00:36:34.217 "copy": false, 00:36:34.217 "nvme_iov_md": false 00:36:34.217 }, 00:36:34.217 "memory_domains": [ 00:36:34.217 { 00:36:34.217 "dma_device_id": "system", 00:36:34.217 "dma_device_type": 1 00:36:34.217 }, 00:36:34.217 { 00:36:34.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:34.217 "dma_device_type": 2 00:36:34.217 }, 00:36:34.217 { 00:36:34.217 "dma_device_id": "system", 00:36:34.217 "dma_device_type": 1 00:36:34.217 }, 00:36:34.217 { 00:36:34.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:34.217 "dma_device_type": 2 00:36:34.217 }, 00:36:34.217 { 00:36:34.217 "dma_device_id": "system", 00:36:34.217 "dma_device_type": 1 00:36:34.217 }, 00:36:34.217 { 00:36:34.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:34.217 "dma_device_type": 2 00:36:34.217 }, 00:36:34.217 { 00:36:34.217 "dma_device_id": "system", 00:36:34.217 "dma_device_type": 1 00:36:34.217 }, 00:36:34.217 { 00:36:34.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:34.217 "dma_device_type": 2 00:36:34.217 } 00:36:34.217 ], 00:36:34.217 "driver_specific": { 00:36:34.217 "raid": { 00:36:34.217 "uuid": "c7d5f602-b6dd-4f90-9b3a-9a587c448365", 00:36:34.217 "strip_size_kb": 64, 00:36:34.217 "state": "online", 00:36:34.217 "raid_level": "concat", 00:36:34.217 "superblock": true, 00:36:34.217 "num_base_bdevs": 4, 00:36:34.217 "num_base_bdevs_discovered": 4, 00:36:34.217 "num_base_bdevs_operational": 4, 00:36:34.217 "base_bdevs_list": [ 00:36:34.217 { 00:36:34.217 "name": "NewBaseBdev", 00:36:34.217 "uuid": "08e33813-951d-4392-9678-4bd4a04feca1", 00:36:34.217 "is_configured": true, 00:36:34.217 "data_offset": 2048, 00:36:34.217 "data_size": 63488 00:36:34.217 }, 00:36:34.217 { 00:36:34.217 "name": "BaseBdev2", 00:36:34.217 "uuid": "8d56a6df-2dbc-4791-8f61-bb2081e61332", 00:36:34.217 "is_configured": true, 00:36:34.217 "data_offset": 2048, 00:36:34.217 "data_size": 63488 00:36:34.217 }, 00:36:34.217 { 00:36:34.217 
"name": "BaseBdev3", 00:36:34.217 "uuid": "3a47abbf-fde7-4226-b08f-dda55c9bd568", 00:36:34.217 "is_configured": true, 00:36:34.217 "data_offset": 2048, 00:36:34.217 "data_size": 63488 00:36:34.217 }, 00:36:34.217 { 00:36:34.217 "name": "BaseBdev4", 00:36:34.217 "uuid": "ee78f61e-f376-4904-a213-6e278b51434e", 00:36:34.217 "is_configured": true, 00:36:34.217 "data_offset": 2048, 00:36:34.217 "data_size": 63488 00:36:34.217 } 00:36:34.217 ] 00:36:34.217 } 00:36:34.217 } 00:36:34.217 }' 00:36:34.217 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:34.217 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:36:34.217 BaseBdev2 00:36:34.217 BaseBdev3 00:36:34.217 BaseBdev4' 00:36:34.217 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:34.217 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:36:34.217 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:34.217 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:34.217 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:36:34.217 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.217 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:34.217 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.217 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:34.217 17:33:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:34.217 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:34.217 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:36:34.217 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.217 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:34.217 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:34.217 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.217 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:34.217 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:34.217 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:34.217 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:34.217 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:36:34.217 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.217 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:34.217 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.217 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:34.217 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:36:34.217 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:34.217 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:34.217 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:36:34.217 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.217 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:34.475 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.475 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:34.475 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:34.475 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:36:34.475 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.475 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:34.475 [2024-11-26 17:33:34.950078] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:34.475 [2024-11-26 17:33:34.950156] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:34.475 [2024-11-26 17:33:34.950275] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:34.475 [2024-11-26 17:33:34.950385] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:34.475 [2024-11-26 17:33:34.950458] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:36:34.475 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.475 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72231 00:36:34.475 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 72231 ']' 00:36:34.475 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 72231 00:36:34.475 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:36:34.475 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:34.475 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72231 00:36:34.475 killing process with pid 72231 00:36:34.475 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:34.475 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:34.475 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72231' 00:36:34.475 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 72231 00:36:34.475 [2024-11-26 17:33:34.997191] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:34.475 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 72231 00:36:34.734 [2024-11-26 17:33:35.406989] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:36.114 17:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:36:36.114 00:36:36.114 real 0m11.814s 00:36:36.114 user 0m18.574s 00:36:36.114 sys 0m2.124s 00:36:36.114 17:33:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:36.114 17:33:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:36.114 ************************************ 00:36:36.114 END TEST raid_state_function_test_sb 00:36:36.114 ************************************ 00:36:36.114 17:33:36 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:36:36.114 17:33:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:36.114 17:33:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:36.114 17:33:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:36.114 ************************************ 00:36:36.114 START TEST raid_superblock_test 00:36:36.114 ************************************ 00:36:36.114 17:33:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:36:36.114 17:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:36:36.114 17:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:36:36.114 17:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:36:36.114 17:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:36:36.114 17:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:36:36.114 17:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:36:36.114 17:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:36:36.114 17:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:36:36.114 17:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:36:36.114 17:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:36:36.114 17:33:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:36:36.114 17:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:36:36.114 17:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:36:36.114 17:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:36:36.114 17:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:36:36.114 17:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:36:36.114 17:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72901 00:36:36.114 17:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:36:36.114 17:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72901 00:36:36.114 17:33:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72901 ']' 00:36:36.114 17:33:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:36.114 17:33:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:36.114 17:33:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:36.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:36.114 17:33:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:36.114 17:33:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:36.114 [2024-11-26 17:33:36.797420] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:36:36.114 [2024-11-26 17:33:36.797685] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72901 ] 00:36:36.374 [2024-11-26 17:33:36.975550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:36.634 [2024-11-26 17:33:37.126919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:36.893 [2024-11-26 17:33:37.344059] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:36.893 [2024-11-26 17:33:37.344230] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:37.152 17:33:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:37.152 17:33:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:36:37.152 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:36:37.152 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:36:37.152 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:36:37.152 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:36:37.152 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:36:37.152 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:36:37.152 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:36:37.152 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:36:37.152 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:36:37.152 
17:33:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.152 17:33:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:37.152 malloc1 00:36:37.152 17:33:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.152 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:36:37.152 17:33:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.152 17:33:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:37.152 [2024-11-26 17:33:37.737738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:36:37.152 [2024-11-26 17:33:37.737865] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:37.152 [2024-11-26 17:33:37.737943] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:36:37.152 [2024-11-26 17:33:37.737984] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:37.152 [2024-11-26 17:33:37.740462] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:37.152 [2024-11-26 17:33:37.740562] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:36:37.152 pt1 00:36:37.152 17:33:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.152 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:36:37.152 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:36:37.152 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:36:37.152 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:36:37.152 17:33:37 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:36:37.152 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:36:37.152 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:36:37.152 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:36:37.152 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:36:37.152 17:33:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.152 17:33:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:37.152 malloc2 00:36:37.152 17:33:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.152 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:37.152 17:33:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.152 17:33:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:37.152 [2024-11-26 17:33:37.803262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:37.152 [2024-11-26 17:33:37.803334] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:37.152 [2024-11-26 17:33:37.803364] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:36:37.152 [2024-11-26 17:33:37.803375] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:37.152 [2024-11-26 17:33:37.805848] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:37.152 [2024-11-26 17:33:37.805892] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:37.152 
pt2 00:36:37.152 17:33:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.152 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:36:37.152 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:36:37.152 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:36:37.152 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:36:37.152 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:36:37.152 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:36:37.152 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:36:37.152 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:36:37.152 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:36:37.152 17:33:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.152 17:33:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:37.410 malloc3 00:36:37.410 17:33:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.410 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:36:37.410 17:33:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.410 17:33:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:37.410 [2024-11-26 17:33:37.874875] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:36:37.410 [2024-11-26 17:33:37.875005] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:37.410 [2024-11-26 17:33:37.875055] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:36:37.410 [2024-11-26 17:33:37.875140] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:37.410 [2024-11-26 17:33:37.877615] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:37.410 [2024-11-26 17:33:37.877707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:36:37.410 pt3 00:36:37.410 17:33:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.410 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:36:37.410 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:36:37.410 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:36:37.410 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:36:37.410 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:36:37.410 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:36:37.410 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:36:37.410 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:36:37.410 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:36:37.410 17:33:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.410 17:33:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:37.410 malloc4 00:36:37.410 17:33:37 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.410 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:36:37.410 17:33:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.410 17:33:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:37.410 [2024-11-26 17:33:37.937855] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:36:37.410 [2024-11-26 17:33:37.937971] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:37.410 [2024-11-26 17:33:37.938018] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:36:37.410 [2024-11-26 17:33:37.938063] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:37.411 [2024-11-26 17:33:37.940411] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:37.411 [2024-11-26 17:33:37.940507] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:36:37.411 pt4 00:36:37.411 17:33:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.411 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:36:37.411 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:36:37.411 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:36:37.411 17:33:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.411 17:33:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:37.411 [2024-11-26 17:33:37.949860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:36:37.411 [2024-11-26 
17:33:37.952102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:37.411 [2024-11-26 17:33:37.952210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:36:37.411 [2024-11-26 17:33:37.952270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:36:37.411 [2024-11-26 17:33:37.952473] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:36:37.411 [2024-11-26 17:33:37.952486] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:36:37.411 [2024-11-26 17:33:37.952826] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:36:37.411 [2024-11-26 17:33:37.953137] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:36:37.411 [2024-11-26 17:33:37.953162] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:36:37.411 [2024-11-26 17:33:37.953340] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:37.411 17:33:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.411 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:36:37.411 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:37.411 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:37.411 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:37.411 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:37.411 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:37.411 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:36:37.411 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:37.411 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:37.411 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:37.411 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:37.411 17:33:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:37.411 17:33:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.411 17:33:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:37.411 17:33:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.411 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:37.411 "name": "raid_bdev1", 00:36:37.411 "uuid": "fefd25be-5aa3-471e-8244-e360aa248720", 00:36:37.411 "strip_size_kb": 64, 00:36:37.411 "state": "online", 00:36:37.411 "raid_level": "concat", 00:36:37.411 "superblock": true, 00:36:37.411 "num_base_bdevs": 4, 00:36:37.411 "num_base_bdevs_discovered": 4, 00:36:37.411 "num_base_bdevs_operational": 4, 00:36:37.411 "base_bdevs_list": [ 00:36:37.411 { 00:36:37.411 "name": "pt1", 00:36:37.411 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:37.411 "is_configured": true, 00:36:37.411 "data_offset": 2048, 00:36:37.411 "data_size": 63488 00:36:37.411 }, 00:36:37.411 { 00:36:37.411 "name": "pt2", 00:36:37.411 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:37.411 "is_configured": true, 00:36:37.411 "data_offset": 2048, 00:36:37.411 "data_size": 63488 00:36:37.411 }, 00:36:37.411 { 00:36:37.411 "name": "pt3", 00:36:37.411 "uuid": "00000000-0000-0000-0000-000000000003", 00:36:37.411 "is_configured": true, 00:36:37.411 "data_offset": 2048, 00:36:37.411 
"data_size": 63488 00:36:37.411 }, 00:36:37.411 { 00:36:37.411 "name": "pt4", 00:36:37.411 "uuid": "00000000-0000-0000-0000-000000000004", 00:36:37.411 "is_configured": true, 00:36:37.411 "data_offset": 2048, 00:36:37.411 "data_size": 63488 00:36:37.411 } 00:36:37.411 ] 00:36:37.411 }' 00:36:37.411 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:37.411 17:33:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:37.978 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:36:37.978 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:36:37.978 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:36:37.978 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:36:37.978 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:36:37.978 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:36:37.978 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:36:37.978 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:36:37.978 17:33:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.978 17:33:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:37.978 [2024-11-26 17:33:38.469348] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:37.978 17:33:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.978 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:37.978 "name": "raid_bdev1", 00:36:37.978 "aliases": [ 00:36:37.978 "fefd25be-5aa3-471e-8244-e360aa248720" 
00:36:37.978 ], 00:36:37.978 "product_name": "Raid Volume", 00:36:37.978 "block_size": 512, 00:36:37.978 "num_blocks": 253952, 00:36:37.978 "uuid": "fefd25be-5aa3-471e-8244-e360aa248720", 00:36:37.978 "assigned_rate_limits": { 00:36:37.979 "rw_ios_per_sec": 0, 00:36:37.979 "rw_mbytes_per_sec": 0, 00:36:37.979 "r_mbytes_per_sec": 0, 00:36:37.979 "w_mbytes_per_sec": 0 00:36:37.979 }, 00:36:37.979 "claimed": false, 00:36:37.979 "zoned": false, 00:36:37.979 "supported_io_types": { 00:36:37.979 "read": true, 00:36:37.979 "write": true, 00:36:37.979 "unmap": true, 00:36:37.979 "flush": true, 00:36:37.979 "reset": true, 00:36:37.979 "nvme_admin": false, 00:36:37.979 "nvme_io": false, 00:36:37.979 "nvme_io_md": false, 00:36:37.979 "write_zeroes": true, 00:36:37.979 "zcopy": false, 00:36:37.979 "get_zone_info": false, 00:36:37.979 "zone_management": false, 00:36:37.979 "zone_append": false, 00:36:37.979 "compare": false, 00:36:37.979 "compare_and_write": false, 00:36:37.979 "abort": false, 00:36:37.979 "seek_hole": false, 00:36:37.979 "seek_data": false, 00:36:37.979 "copy": false, 00:36:37.979 "nvme_iov_md": false 00:36:37.979 }, 00:36:37.979 "memory_domains": [ 00:36:37.979 { 00:36:37.979 "dma_device_id": "system", 00:36:37.979 "dma_device_type": 1 00:36:37.979 }, 00:36:37.979 { 00:36:37.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:37.979 "dma_device_type": 2 00:36:37.979 }, 00:36:37.979 { 00:36:37.979 "dma_device_id": "system", 00:36:37.979 "dma_device_type": 1 00:36:37.979 }, 00:36:37.979 { 00:36:37.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:37.979 "dma_device_type": 2 00:36:37.979 }, 00:36:37.979 { 00:36:37.979 "dma_device_id": "system", 00:36:37.979 "dma_device_type": 1 00:36:37.979 }, 00:36:37.979 { 00:36:37.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:37.979 "dma_device_type": 2 00:36:37.979 }, 00:36:37.979 { 00:36:37.979 "dma_device_id": "system", 00:36:37.979 "dma_device_type": 1 00:36:37.979 }, 00:36:37.979 { 00:36:37.979 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:36:37.979 "dma_device_type": 2 00:36:37.979 } 00:36:37.979 ], 00:36:37.979 "driver_specific": { 00:36:37.979 "raid": { 00:36:37.979 "uuid": "fefd25be-5aa3-471e-8244-e360aa248720", 00:36:37.979 "strip_size_kb": 64, 00:36:37.979 "state": "online", 00:36:37.979 "raid_level": "concat", 00:36:37.979 "superblock": true, 00:36:37.979 "num_base_bdevs": 4, 00:36:37.979 "num_base_bdevs_discovered": 4, 00:36:37.979 "num_base_bdevs_operational": 4, 00:36:37.979 "base_bdevs_list": [ 00:36:37.979 { 00:36:37.979 "name": "pt1", 00:36:37.979 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:37.979 "is_configured": true, 00:36:37.979 "data_offset": 2048, 00:36:37.979 "data_size": 63488 00:36:37.979 }, 00:36:37.979 { 00:36:37.979 "name": "pt2", 00:36:37.979 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:37.979 "is_configured": true, 00:36:37.979 "data_offset": 2048, 00:36:37.979 "data_size": 63488 00:36:37.979 }, 00:36:37.979 { 00:36:37.979 "name": "pt3", 00:36:37.979 "uuid": "00000000-0000-0000-0000-000000000003", 00:36:37.979 "is_configured": true, 00:36:37.979 "data_offset": 2048, 00:36:37.979 "data_size": 63488 00:36:37.979 }, 00:36:37.979 { 00:36:37.979 "name": "pt4", 00:36:37.979 "uuid": "00000000-0000-0000-0000-000000000004", 00:36:37.979 "is_configured": true, 00:36:37.979 "data_offset": 2048, 00:36:37.979 "data_size": 63488 00:36:37.979 } 00:36:37.979 ] 00:36:37.979 } 00:36:37.979 } 00:36:37.979 }' 00:36:37.979 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:37.979 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:36:37.979 pt2 00:36:37.979 pt3 00:36:37.979 pt4' 00:36:37.979 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:37.979 17:33:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:36:37.979 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:37.979 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:36:37.979 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:37.979 17:33:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.979 17:33:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:37.979 17:33:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.979 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:37.979 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:37.979 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:37.979 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:36:37.979 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:37.979 17:33:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.979 17:33:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:37.979 17:33:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.238 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:38.238 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:38.238 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:38.238 17:33:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:36:38.238 17:33:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.238 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:38.238 17:33:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:38.238 17:33:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.238 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:38.238 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:38.238 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:38.238 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:38.239 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:36:38.239 17:33:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.239 17:33:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:38.239 17:33:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.239 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:38.239 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:38.239 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:36:38.239 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:36:38.239 17:33:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:36:38.239 17:33:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:38.239 [2024-11-26 17:33:38.796821] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:38.239 17:33:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.239 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=fefd25be-5aa3-471e-8244-e360aa248720 00:36:38.239 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z fefd25be-5aa3-471e-8244-e360aa248720 ']' 00:36:38.239 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:36:38.239 17:33:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.239 17:33:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:38.239 [2024-11-26 17:33:38.840374] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:38.239 [2024-11-26 17:33:38.840453] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:38.239 [2024-11-26 17:33:38.840588] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:38.239 [2024-11-26 17:33:38.840714] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:38.239 [2024-11-26 17:33:38.840775] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:36:38.239 17:33:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.239 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:38.239 17:33:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.239 17:33:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:36:38.239 17:33:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:38.239 17:33:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.239 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:36:38.239 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:36:38.239 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:36:38.239 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:36:38.239 17:33:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.239 17:33:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:38.239 17:33:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.239 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:36:38.239 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:36:38.239 17:33:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.239 17:33:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:38.239 17:33:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.239 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:36:38.239 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:36:38.239 17:33:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.239 17:33:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:38.499 17:33:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.499 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:36:38.499 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:36:38.499 17:33:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.499 17:33:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:38.499 17:33:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.499 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:36:38.499 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:36:38.499 17:33:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.499 17:33:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:38.499 17:33:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.499 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:36:38.499 17:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:36:38.499 17:33:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:36:38.499 17:33:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:36:38.499 17:33:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:38.499 17:33:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:38.499 17:33:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:38.499 17:33:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:38.499 17:33:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:36:38.499 17:33:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.499 17:33:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:38.499 [2024-11-26 17:33:39.012214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:36:38.499 [2024-11-26 17:33:39.014384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:36:38.499 [2024-11-26 17:33:39.014442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:36:38.499 [2024-11-26 17:33:39.014480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:36:38.499 [2024-11-26 17:33:39.014547] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:36:38.499 [2024-11-26 17:33:39.014608] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:36:38.499 [2024-11-26 17:33:39.014630] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:36:38.499 [2024-11-26 17:33:39.014651] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:36:38.499 [2024-11-26 17:33:39.014683] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:38.499 [2024-11-26 17:33:39.014709] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:36:38.499 request: 00:36:38.499 { 00:36:38.499 "name": "raid_bdev1", 00:36:38.499 "raid_level": "concat", 00:36:38.499 "base_bdevs": [ 00:36:38.499 "malloc1", 00:36:38.499 "malloc2", 00:36:38.499 "malloc3", 00:36:38.499 "malloc4" 00:36:38.499 ], 00:36:38.499 "strip_size_kb": 64, 00:36:38.499 "superblock": false, 00:36:38.499 "method": "bdev_raid_create", 00:36:38.499 "req_id": 1 00:36:38.499 } 00:36:38.499 Got JSON-RPC error response 00:36:38.499 response: 00:36:38.499 { 00:36:38.499 "code": -17, 00:36:38.499 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:36:38.499 } 00:36:38.499 17:33:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:38.499 17:33:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:36:38.499 17:33:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:38.499 17:33:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:38.499 17:33:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:38.499 17:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:38.499 17:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:36:38.499 17:33:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.499 17:33:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:38.499 17:33:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.499 17:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:36:38.499 17:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:36:38.499 17:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:36:38.499 17:33:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.499 17:33:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:38.499 [2024-11-26 17:33:39.076053] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:36:38.499 [2024-11-26 17:33:39.076230] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:38.499 [2024-11-26 17:33:39.076292] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:36:38.499 [2024-11-26 17:33:39.076336] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:38.499 [2024-11-26 17:33:39.079001] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:38.499 [2024-11-26 17:33:39.079122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:36:38.499 [2024-11-26 17:33:39.079272] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:36:38.499 [2024-11-26 17:33:39.079385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:36:38.499 pt1 00:36:38.499 17:33:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.499 17:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:36:38.499 17:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:38.499 17:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:38.499 17:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:38.499 17:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:38.499 17:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:36:38.499 17:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:38.499 17:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:38.499 17:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:38.499 17:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:38.499 17:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:38.499 17:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:38.499 17:33:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.499 17:33:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:38.499 17:33:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.499 17:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:38.499 "name": "raid_bdev1", 00:36:38.499 "uuid": "fefd25be-5aa3-471e-8244-e360aa248720", 00:36:38.499 "strip_size_kb": 64, 00:36:38.499 "state": "configuring", 00:36:38.499 "raid_level": "concat", 00:36:38.499 "superblock": true, 00:36:38.499 "num_base_bdevs": 4, 00:36:38.499 "num_base_bdevs_discovered": 1, 00:36:38.499 "num_base_bdevs_operational": 4, 00:36:38.499 "base_bdevs_list": [ 00:36:38.499 { 00:36:38.499 "name": "pt1", 00:36:38.499 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:38.499 "is_configured": true, 00:36:38.499 "data_offset": 2048, 00:36:38.499 "data_size": 63488 00:36:38.499 }, 00:36:38.499 { 00:36:38.499 "name": null, 00:36:38.499 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:38.499 "is_configured": false, 00:36:38.499 "data_offset": 2048, 00:36:38.499 "data_size": 63488 00:36:38.499 }, 00:36:38.499 { 00:36:38.499 "name": null, 00:36:38.499 
"uuid": "00000000-0000-0000-0000-000000000003", 00:36:38.499 "is_configured": false, 00:36:38.499 "data_offset": 2048, 00:36:38.499 "data_size": 63488 00:36:38.499 }, 00:36:38.499 { 00:36:38.499 "name": null, 00:36:38.499 "uuid": "00000000-0000-0000-0000-000000000004", 00:36:38.499 "is_configured": false, 00:36:38.499 "data_offset": 2048, 00:36:38.499 "data_size": 63488 00:36:38.499 } 00:36:38.499 ] 00:36:38.499 }' 00:36:38.499 17:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:38.499 17:33:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:39.064 17:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:36:39.064 17:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:39.064 17:33:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.064 17:33:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:39.064 [2024-11-26 17:33:39.591178] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:39.064 [2024-11-26 17:33:39.591333] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:39.064 [2024-11-26 17:33:39.591361] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:36:39.064 [2024-11-26 17:33:39.591376] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:39.064 [2024-11-26 17:33:39.591889] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:39.065 [2024-11-26 17:33:39.591922] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:39.065 [2024-11-26 17:33:39.592014] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:36:39.065 [2024-11-26 17:33:39.592043] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:39.065 pt2 00:36:39.065 17:33:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.065 17:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:36:39.065 17:33:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.065 17:33:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:39.065 [2024-11-26 17:33:39.599166] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:36:39.065 17:33:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.065 17:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:36:39.065 17:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:39.065 17:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:39.065 17:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:39.065 17:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:39.065 17:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:39.065 17:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:39.065 17:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:39.065 17:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:39.065 17:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:39.065 17:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:39.065 17:33:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.065 17:33:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:39.065 17:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:39.065 17:33:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.065 17:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:39.065 "name": "raid_bdev1", 00:36:39.065 "uuid": "fefd25be-5aa3-471e-8244-e360aa248720", 00:36:39.065 "strip_size_kb": 64, 00:36:39.065 "state": "configuring", 00:36:39.065 "raid_level": "concat", 00:36:39.065 "superblock": true, 00:36:39.065 "num_base_bdevs": 4, 00:36:39.065 "num_base_bdevs_discovered": 1, 00:36:39.065 "num_base_bdevs_operational": 4, 00:36:39.065 "base_bdevs_list": [ 00:36:39.065 { 00:36:39.065 "name": "pt1", 00:36:39.065 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:39.065 "is_configured": true, 00:36:39.065 "data_offset": 2048, 00:36:39.065 "data_size": 63488 00:36:39.065 }, 00:36:39.065 { 00:36:39.065 "name": null, 00:36:39.065 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:39.065 "is_configured": false, 00:36:39.065 "data_offset": 0, 00:36:39.065 "data_size": 63488 00:36:39.065 }, 00:36:39.065 { 00:36:39.065 "name": null, 00:36:39.065 "uuid": "00000000-0000-0000-0000-000000000003", 00:36:39.065 "is_configured": false, 00:36:39.065 "data_offset": 2048, 00:36:39.065 "data_size": 63488 00:36:39.065 }, 00:36:39.065 { 00:36:39.065 "name": null, 00:36:39.065 "uuid": "00000000-0000-0000-0000-000000000004", 00:36:39.065 "is_configured": false, 00:36:39.065 "data_offset": 2048, 00:36:39.065 "data_size": 63488 00:36:39.065 } 00:36:39.065 ] 00:36:39.065 }' 00:36:39.065 17:33:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:39.065 17:33:39 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:36:39.634 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:36:39.634 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:36:39.634 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:39.634 17:33:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.634 17:33:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:39.634 [2024-11-26 17:33:40.042429] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:39.634 [2024-11-26 17:33:40.042567] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:39.634 [2024-11-26 17:33:40.042612] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:36:39.634 [2024-11-26 17:33:40.042647] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:39.634 [2024-11-26 17:33:40.043206] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:39.634 [2024-11-26 17:33:40.043267] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:39.634 [2024-11-26 17:33:40.043389] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:36:39.634 [2024-11-26 17:33:40.043446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:39.634 pt2 00:36:39.634 17:33:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.634 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:36:39.634 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:36:39.634 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:36:39.634 17:33:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.634 17:33:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:39.634 [2024-11-26 17:33:40.054378] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:36:39.634 [2024-11-26 17:33:40.054487] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:39.634 [2024-11-26 17:33:40.054535] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:36:39.634 [2024-11-26 17:33:40.054589] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:39.634 [2024-11-26 17:33:40.055071] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:39.634 [2024-11-26 17:33:40.055147] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:36:39.634 [2024-11-26 17:33:40.055286] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:36:39.634 [2024-11-26 17:33:40.055354] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:36:39.634 pt3 00:36:39.634 17:33:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.634 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:36:39.634 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:36:39.634 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:36:39.634 17:33:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.634 17:33:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:39.634 [2024-11-26 17:33:40.066325] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:36:39.634 [2024-11-26 17:33:40.066418] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:39.634 [2024-11-26 17:33:40.066458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:36:39.634 [2024-11-26 17:33:40.066479] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:39.634 [2024-11-26 17:33:40.066961] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:39.634 [2024-11-26 17:33:40.066980] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:36:39.634 [2024-11-26 17:33:40.067053] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:36:39.634 [2024-11-26 17:33:40.067076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:36:39.634 [2024-11-26 17:33:40.067219] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:36:39.634 [2024-11-26 17:33:40.067229] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:36:39.634 [2024-11-26 17:33:40.067482] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:36:39.634 [2024-11-26 17:33:40.067670] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:36:39.634 [2024-11-26 17:33:40.067708] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:36:39.634 [2024-11-26 17:33:40.067870] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:39.634 pt4 00:36:39.634 17:33:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.634 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:36:39.634 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:36:39.634 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:36:39.634 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:39.634 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:39.634 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:39.634 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:39.634 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:39.634 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:39.634 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:39.634 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:39.634 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:39.634 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:39.634 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:39.634 17:33:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.634 17:33:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:39.634 17:33:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.634 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:39.634 "name": "raid_bdev1", 00:36:39.634 "uuid": "fefd25be-5aa3-471e-8244-e360aa248720", 00:36:39.634 "strip_size_kb": 64, 00:36:39.634 "state": "online", 00:36:39.634 "raid_level": "concat", 00:36:39.634 
"superblock": true, 00:36:39.634 "num_base_bdevs": 4, 00:36:39.634 "num_base_bdevs_discovered": 4, 00:36:39.634 "num_base_bdevs_operational": 4, 00:36:39.634 "base_bdevs_list": [ 00:36:39.634 { 00:36:39.634 "name": "pt1", 00:36:39.635 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:39.635 "is_configured": true, 00:36:39.635 "data_offset": 2048, 00:36:39.635 "data_size": 63488 00:36:39.635 }, 00:36:39.635 { 00:36:39.635 "name": "pt2", 00:36:39.635 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:39.635 "is_configured": true, 00:36:39.635 "data_offset": 2048, 00:36:39.635 "data_size": 63488 00:36:39.635 }, 00:36:39.635 { 00:36:39.635 "name": "pt3", 00:36:39.635 "uuid": "00000000-0000-0000-0000-000000000003", 00:36:39.635 "is_configured": true, 00:36:39.635 "data_offset": 2048, 00:36:39.635 "data_size": 63488 00:36:39.635 }, 00:36:39.635 { 00:36:39.635 "name": "pt4", 00:36:39.635 "uuid": "00000000-0000-0000-0000-000000000004", 00:36:39.635 "is_configured": true, 00:36:39.635 "data_offset": 2048, 00:36:39.635 "data_size": 63488 00:36:39.635 } 00:36:39.635 ] 00:36:39.635 }' 00:36:39.635 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:39.635 17:33:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:39.907 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:36:39.907 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:36:39.907 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:36:39.907 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:36:39.907 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:36:39.907 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:36:39.907 17:33:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:36:39.907 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:36:39.907 17:33:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.907 17:33:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:39.907 [2024-11-26 17:33:40.497998] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:39.907 17:33:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.907 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:39.907 "name": "raid_bdev1", 00:36:39.907 "aliases": [ 00:36:39.907 "fefd25be-5aa3-471e-8244-e360aa248720" 00:36:39.907 ], 00:36:39.907 "product_name": "Raid Volume", 00:36:39.907 "block_size": 512, 00:36:39.907 "num_blocks": 253952, 00:36:39.907 "uuid": "fefd25be-5aa3-471e-8244-e360aa248720", 00:36:39.907 "assigned_rate_limits": { 00:36:39.907 "rw_ios_per_sec": 0, 00:36:39.907 "rw_mbytes_per_sec": 0, 00:36:39.907 "r_mbytes_per_sec": 0, 00:36:39.907 "w_mbytes_per_sec": 0 00:36:39.907 }, 00:36:39.907 "claimed": false, 00:36:39.907 "zoned": false, 00:36:39.907 "supported_io_types": { 00:36:39.907 "read": true, 00:36:39.907 "write": true, 00:36:39.907 "unmap": true, 00:36:39.907 "flush": true, 00:36:39.907 "reset": true, 00:36:39.907 "nvme_admin": false, 00:36:39.907 "nvme_io": false, 00:36:39.907 "nvme_io_md": false, 00:36:39.907 "write_zeroes": true, 00:36:39.907 "zcopy": false, 00:36:39.907 "get_zone_info": false, 00:36:39.907 "zone_management": false, 00:36:39.907 "zone_append": false, 00:36:39.907 "compare": false, 00:36:39.907 "compare_and_write": false, 00:36:39.907 "abort": false, 00:36:39.907 "seek_hole": false, 00:36:39.907 "seek_data": false, 00:36:39.907 "copy": false, 00:36:39.907 "nvme_iov_md": false 00:36:39.907 }, 00:36:39.907 
"memory_domains": [ 00:36:39.907 { 00:36:39.907 "dma_device_id": "system", 00:36:39.907 "dma_device_type": 1 00:36:39.907 }, 00:36:39.907 { 00:36:39.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:39.907 "dma_device_type": 2 00:36:39.907 }, 00:36:39.907 { 00:36:39.907 "dma_device_id": "system", 00:36:39.907 "dma_device_type": 1 00:36:39.907 }, 00:36:39.907 { 00:36:39.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:39.907 "dma_device_type": 2 00:36:39.907 }, 00:36:39.907 { 00:36:39.907 "dma_device_id": "system", 00:36:39.907 "dma_device_type": 1 00:36:39.907 }, 00:36:39.907 { 00:36:39.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:39.907 "dma_device_type": 2 00:36:39.907 }, 00:36:39.907 { 00:36:39.907 "dma_device_id": "system", 00:36:39.907 "dma_device_type": 1 00:36:39.907 }, 00:36:39.907 { 00:36:39.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:39.907 "dma_device_type": 2 00:36:39.907 } 00:36:39.907 ], 00:36:39.907 "driver_specific": { 00:36:39.907 "raid": { 00:36:39.907 "uuid": "fefd25be-5aa3-471e-8244-e360aa248720", 00:36:39.907 "strip_size_kb": 64, 00:36:39.907 "state": "online", 00:36:39.907 "raid_level": "concat", 00:36:39.907 "superblock": true, 00:36:39.907 "num_base_bdevs": 4, 00:36:39.907 "num_base_bdevs_discovered": 4, 00:36:39.907 "num_base_bdevs_operational": 4, 00:36:39.907 "base_bdevs_list": [ 00:36:39.907 { 00:36:39.907 "name": "pt1", 00:36:39.907 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:39.907 "is_configured": true, 00:36:39.907 "data_offset": 2048, 00:36:39.907 "data_size": 63488 00:36:39.907 }, 00:36:39.907 { 00:36:39.907 "name": "pt2", 00:36:39.907 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:39.907 "is_configured": true, 00:36:39.907 "data_offset": 2048, 00:36:39.907 "data_size": 63488 00:36:39.907 }, 00:36:39.907 { 00:36:39.907 "name": "pt3", 00:36:39.907 "uuid": "00000000-0000-0000-0000-000000000003", 00:36:39.907 "is_configured": true, 00:36:39.907 "data_offset": 2048, 00:36:39.907 "data_size": 63488 
00:36:39.907 }, 00:36:39.907 { 00:36:39.907 "name": "pt4", 00:36:39.907 "uuid": "00000000-0000-0000-0000-000000000004", 00:36:39.907 "is_configured": true, 00:36:39.907 "data_offset": 2048, 00:36:39.907 "data_size": 63488 00:36:39.907 } 00:36:39.907 ] 00:36:39.907 } 00:36:39.907 } 00:36:39.907 }' 00:36:39.907 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:39.907 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:36:39.907 pt2 00:36:39.907 pt3 00:36:39.907 pt4' 00:36:39.907 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:40.239 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:36:40.240 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:40.240 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:40.240 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:36:40.240 17:33:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.240 17:33:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:40.240 17:33:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.240 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:40.240 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:40.240 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:40.240 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:36:40.240 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:40.240 17:33:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.240 17:33:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:40.240 17:33:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.240 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:40.240 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:40.240 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:40.240 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:36:40.240 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:40.240 17:33:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.240 17:33:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:40.240 17:33:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.240 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:40.240 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:40.240 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:40.240 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:36:40.240 17:33:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.240 17:33:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:36:40.240 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:40.240 17:33:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.240 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:40.240 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:40.240 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:36:40.240 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:36:40.240 17:33:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.240 17:33:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:40.240 [2024-11-26 17:33:40.857390] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:40.240 17:33:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.240 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' fefd25be-5aa3-471e-8244-e360aa248720 '!=' fefd25be-5aa3-471e-8244-e360aa248720 ']' 00:36:40.240 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:36:40.240 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:36:40.240 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:36:40.240 17:33:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72901 00:36:40.240 17:33:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72901 ']' 00:36:40.240 17:33:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72901 00:36:40.240 17:33:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:36:40.240 17:33:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:40.240 17:33:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72901 00:36:40.497 killing process with pid 72901 00:36:40.498 17:33:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:40.498 17:33:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:40.498 17:33:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72901' 00:36:40.498 17:33:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72901 00:36:40.498 [2024-11-26 17:33:40.941179] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:40.498 [2024-11-26 17:33:40.941272] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:40.498 17:33:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72901 00:36:40.498 [2024-11-26 17:33:40.941354] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:40.498 [2024-11-26 17:33:40.941366] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:36:40.756 [2024-11-26 17:33:41.375154] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:42.130 17:33:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:36:42.130 00:36:42.130 real 0m5.872s 00:36:42.130 user 0m8.406s 00:36:42.130 sys 0m1.049s 00:36:42.130 17:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:42.130 17:33:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:42.130 ************************************ 00:36:42.130 END TEST raid_superblock_test 
00:36:42.130 ************************************ 00:36:42.131 17:33:42 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:36:42.131 17:33:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:36:42.131 17:33:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:42.131 17:33:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:42.131 ************************************ 00:36:42.131 START TEST raid_read_error_test 00:36:42.131 ************************************ 00:36:42.131 17:33:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:36:42.131 17:33:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:36:42.131 17:33:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:36:42.131 17:33:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:36:42.131 17:33:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:36:42.131 17:33:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:36:42.131 17:33:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:36:42.131 17:33:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:36:42.131 17:33:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:36:42.131 17:33:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:36:42.131 17:33:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:36:42.131 17:33:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:36:42.131 17:33:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:36:42.131 17:33:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:36:42.131 17:33:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:36:42.131 17:33:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:36:42.131 17:33:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:36:42.131 17:33:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:36:42.131 17:33:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:36:42.131 17:33:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:36:42.131 17:33:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:36:42.131 17:33:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:36:42.131 17:33:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:36:42.131 17:33:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:36:42.131 17:33:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:36:42.131 17:33:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:36:42.131 17:33:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:36:42.131 17:33:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:36:42.131 17:33:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:36:42.131 17:33:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.2auxwLIkAe 00:36:42.131 17:33:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73170 00:36:42.131 17:33:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z 
-f -L bdev_raid 00:36:42.131 17:33:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73170 00:36:42.131 17:33:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 73170 ']' 00:36:42.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:42.131 17:33:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:42.131 17:33:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:42.131 17:33:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:42.131 17:33:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:42.131 17:33:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:42.131 [2024-11-26 17:33:42.757666] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:36:42.131 [2024-11-26 17:33:42.757790] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73170 ] 00:36:42.389 [2024-11-26 17:33:42.914719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:42.389 [2024-11-26 17:33:43.033028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:42.647 [2024-11-26 17:33:43.244434] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:42.647 [2024-11-26 17:33:43.244579] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:43.213 17:33:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:43.213 17:33:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:36:43.213 17:33:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:36:43.213 17:33:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:36:43.213 17:33:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.213 17:33:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:43.213 BaseBdev1_malloc 00:36:43.213 17:33:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.213 17:33:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:36:43.213 17:33:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.213 17:33:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:43.213 true 00:36:43.213 17:33:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:36:43.213 17:33:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:36:43.213 17:33:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.213 17:33:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:43.213 [2024-11-26 17:33:43.735211] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:36:43.213 [2024-11-26 17:33:43.735279] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:43.213 [2024-11-26 17:33:43.735302] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:36:43.213 [2024-11-26 17:33:43.735314] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:43.213 [2024-11-26 17:33:43.737739] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:43.213 [2024-11-26 17:33:43.737798] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:36:43.213 BaseBdev1 00:36:43.213 17:33:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.213 17:33:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:36:43.213 17:33:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:36:43.213 17:33:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.213 17:33:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:43.213 BaseBdev2_malloc 00:36:43.213 17:33:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.213 17:33:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:36:43.213 17:33:43 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.213 17:33:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:43.213 true 00:36:43.213 17:33:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.214 17:33:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:36:43.214 17:33:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.214 17:33:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:43.214 [2024-11-26 17:33:43.807881] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:36:43.214 [2024-11-26 17:33:43.807936] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:43.214 [2024-11-26 17:33:43.807954] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:36:43.214 [2024-11-26 17:33:43.807964] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:43.214 [2024-11-26 17:33:43.810129] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:43.214 [2024-11-26 17:33:43.810170] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:36:43.214 BaseBdev2 00:36:43.214 17:33:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.214 17:33:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:36:43.214 17:33:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:36:43.214 17:33:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.214 17:33:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:43.214 BaseBdev3_malloc 00:36:43.214 17:33:43 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.214 17:33:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:36:43.214 17:33:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.214 17:33:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:43.214 true 00:36:43.214 17:33:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.214 17:33:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:36:43.214 17:33:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.214 17:33:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:43.214 [2024-11-26 17:33:43.884010] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:36:43.214 [2024-11-26 17:33:43.884075] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:43.214 [2024-11-26 17:33:43.884107] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:36:43.214 [2024-11-26 17:33:43.884135] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:43.214 [2024-11-26 17:33:43.886324] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:43.214 [2024-11-26 17:33:43.886366] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:36:43.214 BaseBdev3 00:36:43.214 17:33:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.214 17:33:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:36:43.214 17:33:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:36:43.214 17:33:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.214 17:33:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:43.472 BaseBdev4_malloc 00:36:43.472 17:33:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.472 17:33:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:36:43.472 17:33:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.472 17:33:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:43.472 true 00:36:43.472 17:33:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.472 17:33:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:36:43.472 17:33:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.472 17:33:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:43.472 [2024-11-26 17:33:43.950329] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:36:43.472 [2024-11-26 17:33:43.950388] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:43.472 [2024-11-26 17:33:43.950406] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:36:43.472 [2024-11-26 17:33:43.950417] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:43.472 [2024-11-26 17:33:43.952534] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:43.472 [2024-11-26 17:33:43.952622] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:36:43.472 BaseBdev4 00:36:43.472 17:33:43 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.472 17:33:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:36:43.472 17:33:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.472 17:33:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:43.472 [2024-11-26 17:33:43.962365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:43.472 [2024-11-26 17:33:43.964178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:43.472 [2024-11-26 17:33:43.964302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:43.472 [2024-11-26 17:33:43.964369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:36:43.472 [2024-11-26 17:33:43.964600] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:36:43.472 [2024-11-26 17:33:43.964616] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:36:43.472 [2024-11-26 17:33:43.964857] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:36:43.472 [2024-11-26 17:33:43.965012] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:36:43.472 [2024-11-26 17:33:43.965023] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:36:43.472 [2024-11-26 17:33:43.965167] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:43.472 17:33:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.472 17:33:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:36:43.472 17:33:43 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:43.472 17:33:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:43.472 17:33:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:43.472 17:33:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:43.472 17:33:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:43.472 17:33:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:43.472 17:33:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:43.472 17:33:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:43.472 17:33:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:43.472 17:33:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:43.472 17:33:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:43.472 17:33:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.472 17:33:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:43.472 17:33:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.472 17:33:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:43.472 "name": "raid_bdev1", 00:36:43.472 "uuid": "0c0ebeea-69c2-4d95-af9b-b74741657f98", 00:36:43.472 "strip_size_kb": 64, 00:36:43.472 "state": "online", 00:36:43.472 "raid_level": "concat", 00:36:43.472 "superblock": true, 00:36:43.472 "num_base_bdevs": 4, 00:36:43.472 "num_base_bdevs_discovered": 4, 00:36:43.472 "num_base_bdevs_operational": 4, 00:36:43.472 "base_bdevs_list": [ 
00:36:43.472 { 00:36:43.472 "name": "BaseBdev1", 00:36:43.472 "uuid": "11ff0607-db2d-54e9-98bc-7c27da16b507", 00:36:43.472 "is_configured": true, 00:36:43.472 "data_offset": 2048, 00:36:43.472 "data_size": 63488 00:36:43.472 }, 00:36:43.472 { 00:36:43.472 "name": "BaseBdev2", 00:36:43.472 "uuid": "7a26ed41-79eb-55cc-a339-a8a10afba855", 00:36:43.472 "is_configured": true, 00:36:43.472 "data_offset": 2048, 00:36:43.472 "data_size": 63488 00:36:43.472 }, 00:36:43.472 { 00:36:43.472 "name": "BaseBdev3", 00:36:43.472 "uuid": "ec23f4d7-eb34-5ae8-a28f-6a8556971f93", 00:36:43.472 "is_configured": true, 00:36:43.472 "data_offset": 2048, 00:36:43.472 "data_size": 63488 00:36:43.472 }, 00:36:43.472 { 00:36:43.472 "name": "BaseBdev4", 00:36:43.472 "uuid": "1d66368c-774d-596f-9692-8beb7ac567db", 00:36:43.472 "is_configured": true, 00:36:43.472 "data_offset": 2048, 00:36:43.472 "data_size": 63488 00:36:43.472 } 00:36:43.472 ] 00:36:43.472 }' 00:36:43.472 17:33:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:43.472 17:33:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:43.731 17:33:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:36:43.731 17:33:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:36:43.988 [2024-11-26 17:33:44.466873] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:36:44.922 17:33:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:36:44.922 17:33:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.922 17:33:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:44.922 17:33:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.922 17:33:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:36:44.922 17:33:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:36:44.922 17:33:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:36:44.922 17:33:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:36:44.922 17:33:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:44.922 17:33:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:44.922 17:33:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:44.922 17:33:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:44.922 17:33:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:44.922 17:33:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:44.922 17:33:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:44.922 17:33:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:44.922 17:33:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:44.922 17:33:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:44.922 17:33:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:44.922 17:33:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.922 17:33:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:44.922 17:33:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.922 17:33:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:44.922 "name": "raid_bdev1", 00:36:44.922 "uuid": "0c0ebeea-69c2-4d95-af9b-b74741657f98", 00:36:44.922 "strip_size_kb": 64, 00:36:44.922 "state": "online", 00:36:44.922 "raid_level": "concat", 00:36:44.922 "superblock": true, 00:36:44.922 "num_base_bdevs": 4, 00:36:44.922 "num_base_bdevs_discovered": 4, 00:36:44.922 "num_base_bdevs_operational": 4, 00:36:44.922 "base_bdevs_list": [ 00:36:44.922 { 00:36:44.922 "name": "BaseBdev1", 00:36:44.922 "uuid": "11ff0607-db2d-54e9-98bc-7c27da16b507", 00:36:44.922 "is_configured": true, 00:36:44.922 "data_offset": 2048, 00:36:44.922 "data_size": 63488 00:36:44.922 }, 00:36:44.922 { 00:36:44.922 "name": "BaseBdev2", 00:36:44.922 "uuid": "7a26ed41-79eb-55cc-a339-a8a10afba855", 00:36:44.922 "is_configured": true, 00:36:44.922 "data_offset": 2048, 00:36:44.922 "data_size": 63488 00:36:44.922 }, 00:36:44.922 { 00:36:44.922 "name": "BaseBdev3", 00:36:44.922 "uuid": "ec23f4d7-eb34-5ae8-a28f-6a8556971f93", 00:36:44.922 "is_configured": true, 00:36:44.922 "data_offset": 2048, 00:36:44.922 "data_size": 63488 00:36:44.922 }, 00:36:44.922 { 00:36:44.922 "name": "BaseBdev4", 00:36:44.922 "uuid": "1d66368c-774d-596f-9692-8beb7ac567db", 00:36:44.922 "is_configured": true, 00:36:44.922 "data_offset": 2048, 00:36:44.922 "data_size": 63488 00:36:44.922 } 00:36:44.922 ] 00:36:44.922 }' 00:36:44.922 17:33:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:44.922 17:33:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:45.181 17:33:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:36:45.181 17:33:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.181 17:33:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:45.181 [2024-11-26 17:33:45.856172] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:45.181 [2024-11-26 17:33:45.856212] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:45.181 [2024-11-26 17:33:45.859432] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:45.181 [2024-11-26 17:33:45.859496] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:45.181 [2024-11-26 17:33:45.859559] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:45.181 [2024-11-26 17:33:45.859592] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:36:45.181 { 00:36:45.181 "results": [ 00:36:45.181 { 00:36:45.181 "job": "raid_bdev1", 00:36:45.181 "core_mask": "0x1", 00:36:45.181 "workload": "randrw", 00:36:45.181 "percentage": 50, 00:36:45.181 "status": "finished", 00:36:45.181 "queue_depth": 1, 00:36:45.181 "io_size": 131072, 00:36:45.181 "runtime": 1.390099, 00:36:45.181 "iops": 14357.97018773483, 00:36:45.181 "mibps": 1794.7462734668538, 00:36:45.181 "io_failed": 1, 00:36:45.181 "io_timeout": 0, 00:36:45.182 "avg_latency_us": 96.32076449842917, 00:36:45.182 "min_latency_us": 27.276855895196505, 00:36:45.182 "max_latency_us": 1695.6366812227075 00:36:45.182 } 00:36:45.182 ], 00:36:45.182 "core_count": 1 00:36:45.182 } 00:36:45.182 17:33:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.182 17:33:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73170 00:36:45.182 17:33:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 73170 ']' 00:36:45.182 17:33:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 73170 00:36:45.182 17:33:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:36:45.182 17:33:45 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:45.182 17:33:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73170 00:36:45.440 17:33:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:45.440 killing process with pid 73170 00:36:45.440 17:33:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:45.440 17:33:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73170' 00:36:45.440 17:33:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 73170 00:36:45.440 [2024-11-26 17:33:45.905525] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:45.440 17:33:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 73170 00:36:45.698 [2024-11-26 17:33:46.290581] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:47.077 17:33:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:36:47.077 17:33:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.2auxwLIkAe 00:36:47.077 17:33:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:36:47.077 17:33:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:36:47.077 17:33:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:36:47.077 17:33:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:36:47.077 17:33:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:36:47.078 ************************************ 00:36:47.078 END TEST raid_read_error_test 00:36:47.078 ************************************ 00:36:47.078 17:33:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:36:47.078 00:36:47.078 real 0m5.034s 
00:36:47.078 user 0m5.955s 00:36:47.078 sys 0m0.544s 00:36:47.078 17:33:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:47.078 17:33:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:47.078 17:33:47 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:36:47.078 17:33:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:36:47.078 17:33:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:47.078 17:33:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:47.078 ************************************ 00:36:47.078 START TEST raid_write_error_test 00:36:47.078 ************************************ 00:36:47.078 17:33:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:36:47.078 17:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:36:47.078 17:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:36:47.078 17:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:36:47.078 17:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:36:47.078 17:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:36:47.078 17:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:36:47.078 17:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:36:47.078 17:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:36:47.078 17:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:36:47.078 17:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:36:47.078 17:33:47 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:36:47.078 17:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:36:47.078 17:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:36:47.078 17:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:36:47.078 17:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:36:47.078 17:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:36:47.078 17:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:36:47.078 17:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:36:47.078 17:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:36:47.078 17:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:36:47.078 17:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:36:47.078 17:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:36:47.078 17:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:36:47.078 17:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:36:47.078 17:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:36:47.078 17:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:36:47.078 17:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:36:47.078 17:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:36:47.078 17:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.WHhStAXNDS 00:36:47.078 17:33:47 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:36:47.078 17:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73317 00:36:47.078 17:33:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73317 00:36:47.078 17:33:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73317 ']' 00:36:47.078 17:33:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:47.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:47.078 17:33:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:47.078 17:33:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:47.078 17:33:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:47.078 17:33:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:47.337 [2024-11-26 17:33:47.865874] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:36:47.337 [2024-11-26 17:33:47.866041] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73317 ] 00:36:47.594 [2024-11-26 17:33:48.074123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:47.594 [2024-11-26 17:33:48.240554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:47.852 [2024-11-26 17:33:48.481742] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:47.852 [2024-11-26 17:33:48.481798] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:48.110 17:33:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:48.110 17:33:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:36:48.111 17:33:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:36:48.111 17:33:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:36:48.111 17:33:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.111 17:33:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:48.369 BaseBdev1_malloc 00:36:48.369 17:33:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.369 17:33:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:36:48.369 17:33:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.369 17:33:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:48.369 true 00:36:48.369 17:33:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:36:48.369 17:33:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:36:48.369 17:33:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.369 17:33:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:48.369 [2024-11-26 17:33:48.834261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:36:48.369 [2024-11-26 17:33:48.834327] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:48.369 [2024-11-26 17:33:48.834351] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:36:48.369 [2024-11-26 17:33:48.834364] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:48.369 [2024-11-26 17:33:48.836829] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:48.369 [2024-11-26 17:33:48.836878] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:36:48.369 BaseBdev1 00:36:48.369 17:33:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.369 17:33:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:36:48.369 17:33:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:36:48.369 17:33:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.369 17:33:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:48.369 BaseBdev2_malloc 00:36:48.369 17:33:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.369 17:33:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:36:48.369 17:33:48 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.369 17:33:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:48.369 true 00:36:48.369 17:33:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.369 17:33:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:36:48.369 17:33:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.369 17:33:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:48.369 [2024-11-26 17:33:48.907502] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:36:48.369 [2024-11-26 17:33:48.907607] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:48.369 [2024-11-26 17:33:48.907625] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:36:48.369 [2024-11-26 17:33:48.907638] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:48.369 [2024-11-26 17:33:48.910055] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:48.369 [2024-11-26 17:33:48.910100] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:36:48.369 BaseBdev2 00:36:48.369 17:33:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.369 17:33:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:36:48.369 17:33:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:36:48.369 17:33:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.369 17:33:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:36:48.369 BaseBdev3_malloc 00:36:48.369 17:33:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.369 17:33:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:36:48.369 17:33:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.369 17:33:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:48.369 true 00:36:48.369 17:33:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.369 17:33:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:36:48.369 17:33:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.369 17:33:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:48.369 [2024-11-26 17:33:48.996863] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:36:48.369 [2024-11-26 17:33:48.996922] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:48.369 [2024-11-26 17:33:48.996942] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:36:48.369 [2024-11-26 17:33:48.996955] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:48.369 [2024-11-26 17:33:48.999316] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:48.369 [2024-11-26 17:33:48.999376] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:36:48.369 BaseBdev3 00:36:48.369 17:33:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.369 17:33:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:36:48.369 17:33:49 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:36:48.369 17:33:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.369 17:33:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:48.369 BaseBdev4_malloc 00:36:48.369 17:33:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.369 17:33:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:36:48.369 17:33:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.369 17:33:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:48.628 true 00:36:48.628 17:33:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.628 17:33:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:36:48.628 17:33:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.628 17:33:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:48.628 [2024-11-26 17:33:49.070807] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:36:48.628 [2024-11-26 17:33:49.070939] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:48.628 [2024-11-26 17:33:49.070969] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:36:48.628 [2024-11-26 17:33:49.070983] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:48.628 [2024-11-26 17:33:49.073441] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:48.628 [2024-11-26 17:33:49.073485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:36:48.628 BaseBdev4 
00:36:48.628 17:33:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.628 17:33:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:36:48.628 17:33:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.628 17:33:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:48.628 [2024-11-26 17:33:49.082916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:48.628 [2024-11-26 17:33:49.084968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:48.628 [2024-11-26 17:33:49.085056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:48.628 [2024-11-26 17:33:49.085129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:36:48.628 [2024-11-26 17:33:49.085388] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:36:48.628 [2024-11-26 17:33:49.085406] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:36:48.628 [2024-11-26 17:33:49.085711] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:36:48.628 [2024-11-26 17:33:49.085885] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:36:48.628 [2024-11-26 17:33:49.085897] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:36:48.628 [2024-11-26 17:33:49.086046] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:48.629 17:33:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.629 17:33:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:36:48.629 17:33:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:48.629 17:33:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:48.629 17:33:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:48.629 17:33:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:48.629 17:33:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:48.629 17:33:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:48.629 17:33:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:48.629 17:33:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:48.629 17:33:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:48.629 17:33:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:48.629 17:33:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:48.629 17:33:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.629 17:33:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:48.629 17:33:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.629 17:33:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:48.629 "name": "raid_bdev1", 00:36:48.629 "uuid": "639d318d-2004-4b0e-81ab-8d7871f9841b", 00:36:48.629 "strip_size_kb": 64, 00:36:48.629 "state": "online", 00:36:48.629 "raid_level": "concat", 00:36:48.629 "superblock": true, 00:36:48.629 "num_base_bdevs": 4, 00:36:48.629 "num_base_bdevs_discovered": 4, 00:36:48.629 
"num_base_bdevs_operational": 4, 00:36:48.629 "base_bdevs_list": [ 00:36:48.629 { 00:36:48.629 "name": "BaseBdev1", 00:36:48.629 "uuid": "9754978d-5f43-59ce-b5d5-50aa5d296bc4", 00:36:48.629 "is_configured": true, 00:36:48.629 "data_offset": 2048, 00:36:48.629 "data_size": 63488 00:36:48.629 }, 00:36:48.629 { 00:36:48.629 "name": "BaseBdev2", 00:36:48.629 "uuid": "614b24de-9b53-518c-bf63-ec561c4a24b4", 00:36:48.629 "is_configured": true, 00:36:48.629 "data_offset": 2048, 00:36:48.629 "data_size": 63488 00:36:48.629 }, 00:36:48.629 { 00:36:48.629 "name": "BaseBdev3", 00:36:48.629 "uuid": "afeb32df-055f-5732-9852-981eb9b34ac4", 00:36:48.629 "is_configured": true, 00:36:48.629 "data_offset": 2048, 00:36:48.629 "data_size": 63488 00:36:48.629 }, 00:36:48.629 { 00:36:48.629 "name": "BaseBdev4", 00:36:48.629 "uuid": "051f26d4-a053-5021-ae6d-ce657b3e1bac", 00:36:48.629 "is_configured": true, 00:36:48.629 "data_offset": 2048, 00:36:48.629 "data_size": 63488 00:36:48.629 } 00:36:48.629 ] 00:36:48.629 }' 00:36:48.629 17:33:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:48.629 17:33:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:48.888 17:33:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:36:48.888 17:33:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:36:49.147 [2024-11-26 17:33:49.666228] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:36:50.081 17:33:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:36:50.081 17:33:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.081 17:33:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:50.081 17:33:50 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.081 17:33:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:36:50.081 17:33:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:36:50.081 17:33:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:36:50.081 17:33:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:36:50.081 17:33:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:50.081 17:33:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:50.081 17:33:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:36:50.081 17:33:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:50.081 17:33:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:50.081 17:33:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:50.081 17:33:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:50.081 17:33:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:50.081 17:33:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:50.081 17:33:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:50.081 17:33:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:50.081 17:33:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.081 17:33:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:50.081 17:33:50 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.081 17:33:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:50.081 "name": "raid_bdev1", 00:36:50.081 "uuid": "639d318d-2004-4b0e-81ab-8d7871f9841b", 00:36:50.081 "strip_size_kb": 64, 00:36:50.081 "state": "online", 00:36:50.081 "raid_level": "concat", 00:36:50.081 "superblock": true, 00:36:50.081 "num_base_bdevs": 4, 00:36:50.081 "num_base_bdevs_discovered": 4, 00:36:50.081 "num_base_bdevs_operational": 4, 00:36:50.081 "base_bdevs_list": [ 00:36:50.081 { 00:36:50.081 "name": "BaseBdev1", 00:36:50.081 "uuid": "9754978d-5f43-59ce-b5d5-50aa5d296bc4", 00:36:50.081 "is_configured": true, 00:36:50.081 "data_offset": 2048, 00:36:50.081 "data_size": 63488 00:36:50.081 }, 00:36:50.081 { 00:36:50.081 "name": "BaseBdev2", 00:36:50.081 "uuid": "614b24de-9b53-518c-bf63-ec561c4a24b4", 00:36:50.081 "is_configured": true, 00:36:50.081 "data_offset": 2048, 00:36:50.081 "data_size": 63488 00:36:50.081 }, 00:36:50.081 { 00:36:50.081 "name": "BaseBdev3", 00:36:50.081 "uuid": "afeb32df-055f-5732-9852-981eb9b34ac4", 00:36:50.081 "is_configured": true, 00:36:50.081 "data_offset": 2048, 00:36:50.081 "data_size": 63488 00:36:50.081 }, 00:36:50.081 { 00:36:50.081 "name": "BaseBdev4", 00:36:50.081 "uuid": "051f26d4-a053-5021-ae6d-ce657b3e1bac", 00:36:50.081 "is_configured": true, 00:36:50.081 "data_offset": 2048, 00:36:50.081 "data_size": 63488 00:36:50.081 } 00:36:50.081 ] 00:36:50.081 }' 00:36:50.081 17:33:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:50.081 17:33:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:50.340 17:33:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:36:50.340 17:33:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.340 17:33:50 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:36:50.340 [2024-11-26 17:33:50.984390] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:50.340 [2024-11-26 17:33:50.984429] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:50.340 [2024-11-26 17:33:50.987635] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:50.340 [2024-11-26 17:33:50.987700] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:50.340 [2024-11-26 17:33:50.987748] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:50.340 [2024-11-26 17:33:50.987764] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:36:50.340 { 00:36:50.340 "results": [ 00:36:50.340 { 00:36:50.340 "job": "raid_bdev1", 00:36:50.340 "core_mask": "0x1", 00:36:50.340 "workload": "randrw", 00:36:50.340 "percentage": 50, 00:36:50.340 "status": "finished", 00:36:50.340 "queue_depth": 1, 00:36:50.340 "io_size": 131072, 00:36:50.340 "runtime": 1.312772, 00:36:50.340 "iops": 14190.582980136687, 00:36:50.340 "mibps": 1773.822872517086, 00:36:50.340 "io_failed": 1, 00:36:50.340 "io_timeout": 0, 00:36:50.340 "avg_latency_us": 97.47491105976802, 00:36:50.340 "min_latency_us": 27.269565217391303, 00:36:50.340 "max_latency_us": 1410.448695652174 00:36:50.340 } 00:36:50.340 ], 00:36:50.340 "core_count": 1 00:36:50.340 } 00:36:50.340 17:33:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.340 17:33:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73317 00:36:50.340 17:33:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73317 ']' 00:36:50.340 17:33:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73317 00:36:50.340 17:33:50 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:36:50.340 17:33:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:50.340 17:33:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73317 00:36:50.340 17:33:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:50.340 killing process with pid 73317 00:36:50.340 17:33:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:50.340 17:33:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73317' 00:36:50.340 17:33:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73317 00:36:50.340 [2024-11-26 17:33:51.033488] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:50.340 17:33:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73317 00:36:50.943 [2024-11-26 17:33:51.362163] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:51.882 17:33:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.WHhStAXNDS 00:36:51.882 17:33:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:36:51.882 17:33:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:36:51.882 17:33:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:36:51.882 17:33:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:36:51.882 17:33:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:36:51.882 17:33:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:36:51.882 ************************************ 00:36:51.882 END TEST raid_write_error_test 00:36:51.882 ************************************ 00:36:51.882 17:33:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]] 00:36:51.882 00:36:51.882 real 0m4.825s 00:36:51.882 user 0m5.677s 00:36:51.882 sys 0m0.653s 00:36:51.882 17:33:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:51.882 17:33:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:36:52.142 17:33:52 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:36:52.142 17:33:52 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:36:52.142 17:33:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:36:52.142 17:33:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:52.142 17:33:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:52.142 ************************************ 00:36:52.142 START TEST raid_state_function_test 00:36:52.142 ************************************ 00:36:52.142 17:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:36:52.142 17:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:36:52.142 17:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:36:52.142 17:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:36:52.142 17:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:36:52.142 17:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:36:52.142 17:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:52.142 17:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:36:52.142 17:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:36:52.142 17:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:52.142 17:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:36:52.142 17:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:36:52.142 17:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:52.142 17:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:36:52.142 17:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:36:52.142 17:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:52.142 17:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:36:52.142 17:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:36:52.142 17:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:52.142 17:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:36:52.142 17:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:36:52.142 17:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:36:52.142 17:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:36:52.142 17:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:36:52.142 17:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:36:52.142 17:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:36:52.142 17:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:36:52.142 17:33:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:36:52.142 17:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:36:52.142 Process raid pid: 73465 00:36:52.142 17:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73465 00:36:52.142 17:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:36:52.142 17:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73465' 00:36:52.142 17:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73465 00:36:52.142 17:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73465 ']' 00:36:52.142 17:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:52.142 17:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:52.142 17:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:52.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:52.142 17:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:52.142 17:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:52.142 [2024-11-26 17:33:52.738997] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:36:52.142 [2024-11-26 17:33:52.739225] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:52.401 [2024-11-26 17:33:52.917256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:52.401 [2024-11-26 17:33:53.043687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:52.660 [2024-11-26 17:33:53.280237] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:52.660 [2024-11-26 17:33:53.280381] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:52.919 17:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:52.919 17:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:36:52.919 17:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:36:52.919 17:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.919 17:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:52.919 [2024-11-26 17:33:53.606599] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:52.919 [2024-11-26 17:33:53.606657] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:52.919 [2024-11-26 17:33:53.606672] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:52.920 [2024-11-26 17:33:53.606683] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:52.920 [2024-11-26 17:33:53.606689] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:36:52.920 [2024-11-26 17:33:53.606699] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:52.920 [2024-11-26 17:33:53.606722] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:36:52.920 [2024-11-26 17:33:53.606731] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:36:53.180 17:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.180 17:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:36:53.180 17:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:53.180 17:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:53.180 17:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:53.180 17:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:53.180 17:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:53.180 17:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:53.180 17:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:53.180 17:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:53.180 17:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:53.180 17:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:53.180 17:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:53.180 17:33:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.180 17:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:53.180 17:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.180 17:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:53.180 "name": "Existed_Raid", 00:36:53.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:53.180 "strip_size_kb": 0, 00:36:53.180 "state": "configuring", 00:36:53.180 "raid_level": "raid1", 00:36:53.180 "superblock": false, 00:36:53.180 "num_base_bdevs": 4, 00:36:53.180 "num_base_bdevs_discovered": 0, 00:36:53.180 "num_base_bdevs_operational": 4, 00:36:53.180 "base_bdevs_list": [ 00:36:53.180 { 00:36:53.180 "name": "BaseBdev1", 00:36:53.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:53.180 "is_configured": false, 00:36:53.180 "data_offset": 0, 00:36:53.180 "data_size": 0 00:36:53.180 }, 00:36:53.180 { 00:36:53.180 "name": "BaseBdev2", 00:36:53.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:53.180 "is_configured": false, 00:36:53.180 "data_offset": 0, 00:36:53.180 "data_size": 0 00:36:53.180 }, 00:36:53.180 { 00:36:53.180 "name": "BaseBdev3", 00:36:53.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:53.180 "is_configured": false, 00:36:53.180 "data_offset": 0, 00:36:53.180 "data_size": 0 00:36:53.180 }, 00:36:53.180 { 00:36:53.180 "name": "BaseBdev4", 00:36:53.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:53.180 "is_configured": false, 00:36:53.180 "data_offset": 0, 00:36:53.180 "data_size": 0 00:36:53.180 } 00:36:53.180 ] 00:36:53.180 }' 00:36:53.180 17:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:53.180 17:33:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:53.439 17:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:36:53.439 17:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.439 17:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:53.439 [2024-11-26 17:33:54.105702] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:53.439 [2024-11-26 17:33:54.105827] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:36:53.439 17:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.439 17:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:36:53.439 17:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.439 17:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:53.439 [2024-11-26 17:33:54.117674] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:53.439 [2024-11-26 17:33:54.117770] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:53.439 [2024-11-26 17:33:54.117802] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:53.439 [2024-11-26 17:33:54.117828] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:53.439 [2024-11-26 17:33:54.117849] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:53.439 [2024-11-26 17:33:54.117872] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:53.439 [2024-11-26 17:33:54.117892] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:36:53.439 [2024-11-26 17:33:54.117915] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:36:53.439 17:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.439 17:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:36:53.439 17:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.439 17:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:53.698 [2024-11-26 17:33:54.170374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:53.698 BaseBdev1 00:36:53.698 17:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.698 17:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:36:53.698 17:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:36:53.698 17:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:53.698 17:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:36:53.698 17:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:53.698 17:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:53.698 17:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:53.698 17:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.698 17:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:53.698 17:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.698 17:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:36:53.698 17:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.698 17:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:53.698 [ 00:36:53.698 { 00:36:53.698 "name": "BaseBdev1", 00:36:53.698 "aliases": [ 00:36:53.698 "5d769972-7509-4e63-8bf5-0b849cc005dd" 00:36:53.698 ], 00:36:53.698 "product_name": "Malloc disk", 00:36:53.698 "block_size": 512, 00:36:53.698 "num_blocks": 65536, 00:36:53.698 "uuid": "5d769972-7509-4e63-8bf5-0b849cc005dd", 00:36:53.698 "assigned_rate_limits": { 00:36:53.698 "rw_ios_per_sec": 0, 00:36:53.698 "rw_mbytes_per_sec": 0, 00:36:53.698 "r_mbytes_per_sec": 0, 00:36:53.698 "w_mbytes_per_sec": 0 00:36:53.698 }, 00:36:53.698 "claimed": true, 00:36:53.698 "claim_type": "exclusive_write", 00:36:53.698 "zoned": false, 00:36:53.698 "supported_io_types": { 00:36:53.698 "read": true, 00:36:53.698 "write": true, 00:36:53.698 "unmap": true, 00:36:53.698 "flush": true, 00:36:53.698 "reset": true, 00:36:53.698 "nvme_admin": false, 00:36:53.698 "nvme_io": false, 00:36:53.698 "nvme_io_md": false, 00:36:53.698 "write_zeroes": true, 00:36:53.698 "zcopy": true, 00:36:53.698 "get_zone_info": false, 00:36:53.698 "zone_management": false, 00:36:53.698 "zone_append": false, 00:36:53.698 "compare": false, 00:36:53.698 "compare_and_write": false, 00:36:53.698 "abort": true, 00:36:53.698 "seek_hole": false, 00:36:53.698 "seek_data": false, 00:36:53.698 "copy": true, 00:36:53.698 "nvme_iov_md": false 00:36:53.698 }, 00:36:53.698 "memory_domains": [ 00:36:53.698 { 00:36:53.698 "dma_device_id": "system", 00:36:53.698 "dma_device_type": 1 00:36:53.698 }, 00:36:53.698 { 00:36:53.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:53.698 "dma_device_type": 2 00:36:53.698 } 00:36:53.698 ], 00:36:53.698 "driver_specific": {} 00:36:53.698 } 00:36:53.698 ] 00:36:53.698 17:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:36:53.698 17:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:36:53.698 17:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:36:53.698 17:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:53.698 17:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:53.698 17:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:53.698 17:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:53.698 17:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:53.698 17:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:53.698 17:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:53.698 17:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:53.698 17:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:53.698 17:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:53.698 17:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.698 17:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:53.698 17:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:53.698 17:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.698 17:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:53.698 "name": "Existed_Raid", 
00:36:53.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:53.698 "strip_size_kb": 0, 00:36:53.698 "state": "configuring", 00:36:53.698 "raid_level": "raid1", 00:36:53.698 "superblock": false, 00:36:53.698 "num_base_bdevs": 4, 00:36:53.698 "num_base_bdevs_discovered": 1, 00:36:53.698 "num_base_bdevs_operational": 4, 00:36:53.698 "base_bdevs_list": [ 00:36:53.698 { 00:36:53.698 "name": "BaseBdev1", 00:36:53.698 "uuid": "5d769972-7509-4e63-8bf5-0b849cc005dd", 00:36:53.698 "is_configured": true, 00:36:53.698 "data_offset": 0, 00:36:53.698 "data_size": 65536 00:36:53.698 }, 00:36:53.698 { 00:36:53.698 "name": "BaseBdev2", 00:36:53.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:53.698 "is_configured": false, 00:36:53.698 "data_offset": 0, 00:36:53.698 "data_size": 0 00:36:53.698 }, 00:36:53.698 { 00:36:53.698 "name": "BaseBdev3", 00:36:53.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:53.699 "is_configured": false, 00:36:53.699 "data_offset": 0, 00:36:53.699 "data_size": 0 00:36:53.699 }, 00:36:53.699 { 00:36:53.699 "name": "BaseBdev4", 00:36:53.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:53.699 "is_configured": false, 00:36:53.699 "data_offset": 0, 00:36:53.699 "data_size": 0 00:36:53.699 } 00:36:53.699 ] 00:36:53.699 }' 00:36:53.699 17:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:53.699 17:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:54.267 17:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:36:54.267 17:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.267 17:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:54.267 [2024-11-26 17:33:54.661608] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:54.267 [2024-11-26 17:33:54.661740] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:36:54.267 17:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.267 17:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:36:54.267 17:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.267 17:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:54.267 [2024-11-26 17:33:54.673640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:54.267 [2024-11-26 17:33:54.675764] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:54.267 [2024-11-26 17:33:54.675862] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:54.267 [2024-11-26 17:33:54.675879] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:54.267 [2024-11-26 17:33:54.675893] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:54.267 [2024-11-26 17:33:54.675901] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:36:54.267 [2024-11-26 17:33:54.675911] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:36:54.267 17:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.267 17:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:36:54.267 17:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:36:54.267 17:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:36:54.267 
17:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:54.267 17:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:54.267 17:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:54.267 17:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:54.267 17:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:54.267 17:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:54.267 17:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:54.267 17:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:54.267 17:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:54.267 17:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:54.267 17:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.267 17:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:54.267 17:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:54.267 17:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.267 17:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:54.267 "name": "Existed_Raid", 00:36:54.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:54.267 "strip_size_kb": 0, 00:36:54.267 "state": "configuring", 00:36:54.267 "raid_level": "raid1", 00:36:54.267 "superblock": false, 00:36:54.267 "num_base_bdevs": 4, 00:36:54.267 "num_base_bdevs_discovered": 1, 
00:36:54.267 "num_base_bdevs_operational": 4, 00:36:54.267 "base_bdevs_list": [ 00:36:54.267 { 00:36:54.267 "name": "BaseBdev1", 00:36:54.267 "uuid": "5d769972-7509-4e63-8bf5-0b849cc005dd", 00:36:54.267 "is_configured": true, 00:36:54.267 "data_offset": 0, 00:36:54.267 "data_size": 65536 00:36:54.267 }, 00:36:54.267 { 00:36:54.267 "name": "BaseBdev2", 00:36:54.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:54.267 "is_configured": false, 00:36:54.267 "data_offset": 0, 00:36:54.267 "data_size": 0 00:36:54.267 }, 00:36:54.267 { 00:36:54.267 "name": "BaseBdev3", 00:36:54.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:54.267 "is_configured": false, 00:36:54.267 "data_offset": 0, 00:36:54.267 "data_size": 0 00:36:54.267 }, 00:36:54.267 { 00:36:54.267 "name": "BaseBdev4", 00:36:54.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:54.267 "is_configured": false, 00:36:54.267 "data_offset": 0, 00:36:54.267 "data_size": 0 00:36:54.267 } 00:36:54.267 ] 00:36:54.267 }' 00:36:54.267 17:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:54.267 17:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:54.527 17:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:36:54.527 17:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.527 17:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:54.527 [2024-11-26 17:33:55.146407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:54.527 BaseBdev2 00:36:54.527 17:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.527 17:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:36:54.527 17:33:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:36:54.527 17:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:54.527 17:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:36:54.527 17:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:54.527 17:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:54.527 17:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:54.527 17:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.527 17:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:54.527 17:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.527 17:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:36:54.527 17:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.527 17:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:54.527 [ 00:36:54.527 { 00:36:54.527 "name": "BaseBdev2", 00:36:54.527 "aliases": [ 00:36:54.527 "0d04d49c-9013-4ea8-8a54-41beb33fad9e" 00:36:54.527 ], 00:36:54.527 "product_name": "Malloc disk", 00:36:54.527 "block_size": 512, 00:36:54.527 "num_blocks": 65536, 00:36:54.527 "uuid": "0d04d49c-9013-4ea8-8a54-41beb33fad9e", 00:36:54.527 "assigned_rate_limits": { 00:36:54.527 "rw_ios_per_sec": 0, 00:36:54.527 "rw_mbytes_per_sec": 0, 00:36:54.527 "r_mbytes_per_sec": 0, 00:36:54.527 "w_mbytes_per_sec": 0 00:36:54.527 }, 00:36:54.527 "claimed": true, 00:36:54.527 "claim_type": "exclusive_write", 00:36:54.527 "zoned": false, 00:36:54.527 "supported_io_types": { 00:36:54.527 "read": true, 
00:36:54.527 "write": true, 00:36:54.527 "unmap": true, 00:36:54.527 "flush": true, 00:36:54.527 "reset": true, 00:36:54.527 "nvme_admin": false, 00:36:54.527 "nvme_io": false, 00:36:54.527 "nvme_io_md": false, 00:36:54.527 "write_zeroes": true, 00:36:54.527 "zcopy": true, 00:36:54.527 "get_zone_info": false, 00:36:54.527 "zone_management": false, 00:36:54.527 "zone_append": false, 00:36:54.527 "compare": false, 00:36:54.527 "compare_and_write": false, 00:36:54.527 "abort": true, 00:36:54.527 "seek_hole": false, 00:36:54.527 "seek_data": false, 00:36:54.527 "copy": true, 00:36:54.527 "nvme_iov_md": false 00:36:54.527 }, 00:36:54.527 "memory_domains": [ 00:36:54.527 { 00:36:54.527 "dma_device_id": "system", 00:36:54.527 "dma_device_type": 1 00:36:54.527 }, 00:36:54.527 { 00:36:54.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:54.527 "dma_device_type": 2 00:36:54.527 } 00:36:54.527 ], 00:36:54.527 "driver_specific": {} 00:36:54.527 } 00:36:54.527 ] 00:36:54.527 17:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.527 17:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:36:54.527 17:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:36:54.527 17:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:36:54.527 17:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:36:54.527 17:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:54.527 17:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:54.527 17:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:54.527 17:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:36:54.527 17:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:54.527 17:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:54.527 17:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:54.527 17:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:54.527 17:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:54.527 17:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:54.527 17:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:54.527 17:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.527 17:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:54.527 17:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.817 17:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:54.817 "name": "Existed_Raid", 00:36:54.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:54.817 "strip_size_kb": 0, 00:36:54.817 "state": "configuring", 00:36:54.817 "raid_level": "raid1", 00:36:54.817 "superblock": false, 00:36:54.817 "num_base_bdevs": 4, 00:36:54.817 "num_base_bdevs_discovered": 2, 00:36:54.817 "num_base_bdevs_operational": 4, 00:36:54.817 "base_bdevs_list": [ 00:36:54.817 { 00:36:54.817 "name": "BaseBdev1", 00:36:54.817 "uuid": "5d769972-7509-4e63-8bf5-0b849cc005dd", 00:36:54.817 "is_configured": true, 00:36:54.817 "data_offset": 0, 00:36:54.817 "data_size": 65536 00:36:54.817 }, 00:36:54.817 { 00:36:54.817 "name": "BaseBdev2", 00:36:54.817 "uuid": "0d04d49c-9013-4ea8-8a54-41beb33fad9e", 00:36:54.817 "is_configured": true, 
00:36:54.817 "data_offset": 0, 00:36:54.817 "data_size": 65536 00:36:54.817 }, 00:36:54.817 { 00:36:54.817 "name": "BaseBdev3", 00:36:54.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:54.817 "is_configured": false, 00:36:54.817 "data_offset": 0, 00:36:54.817 "data_size": 0 00:36:54.817 }, 00:36:54.817 { 00:36:54.817 "name": "BaseBdev4", 00:36:54.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:54.817 "is_configured": false, 00:36:54.817 "data_offset": 0, 00:36:54.817 "data_size": 0 00:36:54.817 } 00:36:54.817 ] 00:36:54.817 }' 00:36:54.817 17:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:54.817 17:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:55.079 17:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:36:55.079 17:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.079 17:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:55.079 [2024-11-26 17:33:55.698293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:55.079 BaseBdev3 00:36:55.079 17:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.079 17:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:36:55.079 17:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:36:55.079 17:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:55.080 17:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:36:55.080 17:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:55.080 17:33:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:55.080 17:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:55.080 17:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.080 17:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:55.080 17:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.080 17:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:36:55.080 17:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.080 17:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:55.080 [ 00:36:55.080 { 00:36:55.080 "name": "BaseBdev3", 00:36:55.080 "aliases": [ 00:36:55.080 "b949babe-ce94-4cba-972f-585ad96a77f2" 00:36:55.080 ], 00:36:55.080 "product_name": "Malloc disk", 00:36:55.080 "block_size": 512, 00:36:55.080 "num_blocks": 65536, 00:36:55.080 "uuid": "b949babe-ce94-4cba-972f-585ad96a77f2", 00:36:55.080 "assigned_rate_limits": { 00:36:55.080 "rw_ios_per_sec": 0, 00:36:55.080 "rw_mbytes_per_sec": 0, 00:36:55.080 "r_mbytes_per_sec": 0, 00:36:55.080 "w_mbytes_per_sec": 0 00:36:55.080 }, 00:36:55.080 "claimed": true, 00:36:55.080 "claim_type": "exclusive_write", 00:36:55.080 "zoned": false, 00:36:55.080 "supported_io_types": { 00:36:55.080 "read": true, 00:36:55.080 "write": true, 00:36:55.080 "unmap": true, 00:36:55.080 "flush": true, 00:36:55.080 "reset": true, 00:36:55.080 "nvme_admin": false, 00:36:55.080 "nvme_io": false, 00:36:55.080 "nvme_io_md": false, 00:36:55.080 "write_zeroes": true, 00:36:55.080 "zcopy": true, 00:36:55.080 "get_zone_info": false, 00:36:55.080 "zone_management": false, 00:36:55.080 "zone_append": false, 00:36:55.080 "compare": false, 00:36:55.080 "compare_and_write": false, 
00:36:55.080 "abort": true, 00:36:55.080 "seek_hole": false, 00:36:55.080 "seek_data": false, 00:36:55.080 "copy": true, 00:36:55.080 "nvme_iov_md": false 00:36:55.080 }, 00:36:55.080 "memory_domains": [ 00:36:55.080 { 00:36:55.080 "dma_device_id": "system", 00:36:55.080 "dma_device_type": 1 00:36:55.080 }, 00:36:55.080 { 00:36:55.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:55.080 "dma_device_type": 2 00:36:55.080 } 00:36:55.080 ], 00:36:55.080 "driver_specific": {} 00:36:55.080 } 00:36:55.080 ] 00:36:55.080 17:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.080 17:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:36:55.080 17:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:36:55.080 17:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:36:55.080 17:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:36:55.080 17:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:55.080 17:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:55.080 17:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:55.080 17:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:55.080 17:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:55.080 17:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:55.080 17:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:55.080 17:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:36:55.080 17:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:55.080 17:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:55.080 17:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:55.080 17:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.080 17:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:55.080 17:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.339 17:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:55.339 "name": "Existed_Raid", 00:36:55.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:55.339 "strip_size_kb": 0, 00:36:55.339 "state": "configuring", 00:36:55.339 "raid_level": "raid1", 00:36:55.339 "superblock": false, 00:36:55.339 "num_base_bdevs": 4, 00:36:55.339 "num_base_bdevs_discovered": 3, 00:36:55.339 "num_base_bdevs_operational": 4, 00:36:55.339 "base_bdevs_list": [ 00:36:55.339 { 00:36:55.339 "name": "BaseBdev1", 00:36:55.339 "uuid": "5d769972-7509-4e63-8bf5-0b849cc005dd", 00:36:55.339 "is_configured": true, 00:36:55.339 "data_offset": 0, 00:36:55.339 "data_size": 65536 00:36:55.339 }, 00:36:55.339 { 00:36:55.339 "name": "BaseBdev2", 00:36:55.339 "uuid": "0d04d49c-9013-4ea8-8a54-41beb33fad9e", 00:36:55.339 "is_configured": true, 00:36:55.340 "data_offset": 0, 00:36:55.340 "data_size": 65536 00:36:55.340 }, 00:36:55.340 { 00:36:55.340 "name": "BaseBdev3", 00:36:55.340 "uuid": "b949babe-ce94-4cba-972f-585ad96a77f2", 00:36:55.340 "is_configured": true, 00:36:55.340 "data_offset": 0, 00:36:55.340 "data_size": 65536 00:36:55.340 }, 00:36:55.340 { 00:36:55.340 "name": "BaseBdev4", 00:36:55.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:55.340 "is_configured": false, 
00:36:55.340 "data_offset": 0, 00:36:55.340 "data_size": 0 00:36:55.340 } 00:36:55.340 ] 00:36:55.340 }' 00:36:55.340 17:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:55.340 17:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:55.600 17:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:36:55.600 17:33:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.600 17:33:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:55.600 [2024-11-26 17:33:56.268442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:36:55.600 [2024-11-26 17:33:56.268612] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:36:55.600 [2024-11-26 17:33:56.268641] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:36:55.600 [2024-11-26 17:33:56.269000] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:36:55.600 [2024-11-26 17:33:56.269260] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:36:55.600 [2024-11-26 17:33:56.269317] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:36:55.600 [2024-11-26 17:33:56.269705] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:55.600 BaseBdev4 00:36:55.600 17:33:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.600 17:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:36:55.600 17:33:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:36:55.600 17:33:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:55.600 17:33:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:36:55.600 17:33:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:55.600 17:33:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:55.600 17:33:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:55.600 17:33:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.600 17:33:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:55.600 17:33:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.600 17:33:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:36:55.600 17:33:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.600 17:33:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:55.861 [ 00:36:55.861 { 00:36:55.861 "name": "BaseBdev4", 00:36:55.861 "aliases": [ 00:36:55.861 "d51bdf0f-17ed-49e8-9e7a-fa7dbc472d31" 00:36:55.861 ], 00:36:55.861 "product_name": "Malloc disk", 00:36:55.861 "block_size": 512, 00:36:55.861 "num_blocks": 65536, 00:36:55.861 "uuid": "d51bdf0f-17ed-49e8-9e7a-fa7dbc472d31", 00:36:55.861 "assigned_rate_limits": { 00:36:55.861 "rw_ios_per_sec": 0, 00:36:55.861 "rw_mbytes_per_sec": 0, 00:36:55.861 "r_mbytes_per_sec": 0, 00:36:55.861 "w_mbytes_per_sec": 0 00:36:55.861 }, 00:36:55.861 "claimed": true, 00:36:55.861 "claim_type": "exclusive_write", 00:36:55.861 "zoned": false, 00:36:55.861 "supported_io_types": { 00:36:55.861 "read": true, 00:36:55.861 "write": true, 00:36:55.861 "unmap": true, 00:36:55.861 "flush": true, 00:36:55.861 "reset": true, 00:36:55.861 
"nvme_admin": false, 00:36:55.861 "nvme_io": false, 00:36:55.861 "nvme_io_md": false, 00:36:55.861 "write_zeroes": true, 00:36:55.861 "zcopy": true, 00:36:55.861 "get_zone_info": false, 00:36:55.861 "zone_management": false, 00:36:55.861 "zone_append": false, 00:36:55.861 "compare": false, 00:36:55.861 "compare_and_write": false, 00:36:55.861 "abort": true, 00:36:55.861 "seek_hole": false, 00:36:55.861 "seek_data": false, 00:36:55.861 "copy": true, 00:36:55.861 "nvme_iov_md": false 00:36:55.861 }, 00:36:55.861 "memory_domains": [ 00:36:55.861 { 00:36:55.861 "dma_device_id": "system", 00:36:55.861 "dma_device_type": 1 00:36:55.861 }, 00:36:55.861 { 00:36:55.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:55.861 "dma_device_type": 2 00:36:55.861 } 00:36:55.861 ], 00:36:55.861 "driver_specific": {} 00:36:55.861 } 00:36:55.861 ] 00:36:55.861 17:33:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.861 17:33:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:36:55.861 17:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:36:55.861 17:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:36:55.861 17:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:36:55.861 17:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:55.861 17:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:55.861 17:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:55.861 17:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:55.861 17:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:55.861 17:33:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:55.861 17:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:55.861 17:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:55.861 17:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:55.861 17:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:55.861 17:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:55.861 17:33:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.861 17:33:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:55.861 17:33:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.861 17:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:55.861 "name": "Existed_Raid", 00:36:55.861 "uuid": "7c247d8c-92b1-4771-8e23-ebd14da9db40", 00:36:55.861 "strip_size_kb": 0, 00:36:55.861 "state": "online", 00:36:55.861 "raid_level": "raid1", 00:36:55.861 "superblock": false, 00:36:55.861 "num_base_bdevs": 4, 00:36:55.861 "num_base_bdevs_discovered": 4, 00:36:55.861 "num_base_bdevs_operational": 4, 00:36:55.861 "base_bdevs_list": [ 00:36:55.861 { 00:36:55.861 "name": "BaseBdev1", 00:36:55.861 "uuid": "5d769972-7509-4e63-8bf5-0b849cc005dd", 00:36:55.861 "is_configured": true, 00:36:55.861 "data_offset": 0, 00:36:55.861 "data_size": 65536 00:36:55.861 }, 00:36:55.861 { 00:36:55.861 "name": "BaseBdev2", 00:36:55.861 "uuid": "0d04d49c-9013-4ea8-8a54-41beb33fad9e", 00:36:55.861 "is_configured": true, 00:36:55.861 "data_offset": 0, 00:36:55.861 "data_size": 65536 00:36:55.861 }, 00:36:55.861 { 00:36:55.861 "name": "BaseBdev3", 00:36:55.861 "uuid": 
"b949babe-ce94-4cba-972f-585ad96a77f2", 00:36:55.861 "is_configured": true, 00:36:55.861 "data_offset": 0, 00:36:55.861 "data_size": 65536 00:36:55.861 }, 00:36:55.861 { 00:36:55.861 "name": "BaseBdev4", 00:36:55.861 "uuid": "d51bdf0f-17ed-49e8-9e7a-fa7dbc472d31", 00:36:55.861 "is_configured": true, 00:36:55.861 "data_offset": 0, 00:36:55.861 "data_size": 65536 00:36:55.861 } 00:36:55.861 ] 00:36:55.861 }' 00:36:55.861 17:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:55.861 17:33:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:56.119 17:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:36:56.119 17:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:36:56.119 17:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:36:56.119 17:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:36:56.120 17:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:36:56.120 17:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:36:56.120 17:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:36:56.120 17:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:36:56.120 17:33:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:56.120 17:33:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:56.120 [2024-11-26 17:33:56.804001] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:56.379 17:33:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:56.379 17:33:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:56.379 "name": "Existed_Raid", 00:36:56.379 "aliases": [ 00:36:56.379 "7c247d8c-92b1-4771-8e23-ebd14da9db40" 00:36:56.379 ], 00:36:56.379 "product_name": "Raid Volume", 00:36:56.379 "block_size": 512, 00:36:56.379 "num_blocks": 65536, 00:36:56.379 "uuid": "7c247d8c-92b1-4771-8e23-ebd14da9db40", 00:36:56.379 "assigned_rate_limits": { 00:36:56.379 "rw_ios_per_sec": 0, 00:36:56.379 "rw_mbytes_per_sec": 0, 00:36:56.379 "r_mbytes_per_sec": 0, 00:36:56.379 "w_mbytes_per_sec": 0 00:36:56.379 }, 00:36:56.379 "claimed": false, 00:36:56.379 "zoned": false, 00:36:56.379 "supported_io_types": { 00:36:56.379 "read": true, 00:36:56.379 "write": true, 00:36:56.379 "unmap": false, 00:36:56.379 "flush": false, 00:36:56.379 "reset": true, 00:36:56.379 "nvme_admin": false, 00:36:56.379 "nvme_io": false, 00:36:56.379 "nvme_io_md": false, 00:36:56.379 "write_zeroes": true, 00:36:56.379 "zcopy": false, 00:36:56.379 "get_zone_info": false, 00:36:56.379 "zone_management": false, 00:36:56.379 "zone_append": false, 00:36:56.379 "compare": false, 00:36:56.379 "compare_and_write": false, 00:36:56.379 "abort": false, 00:36:56.379 "seek_hole": false, 00:36:56.379 "seek_data": false, 00:36:56.379 "copy": false, 00:36:56.380 "nvme_iov_md": false 00:36:56.380 }, 00:36:56.380 "memory_domains": [ 00:36:56.380 { 00:36:56.380 "dma_device_id": "system", 00:36:56.380 "dma_device_type": 1 00:36:56.380 }, 00:36:56.380 { 00:36:56.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:56.380 "dma_device_type": 2 00:36:56.380 }, 00:36:56.380 { 00:36:56.380 "dma_device_id": "system", 00:36:56.380 "dma_device_type": 1 00:36:56.380 }, 00:36:56.380 { 00:36:56.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:56.380 "dma_device_type": 2 00:36:56.380 }, 00:36:56.380 { 00:36:56.380 "dma_device_id": "system", 00:36:56.380 "dma_device_type": 1 00:36:56.380 }, 00:36:56.380 { 00:36:56.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:36:56.380 "dma_device_type": 2 00:36:56.380 }, 00:36:56.380 { 00:36:56.380 "dma_device_id": "system", 00:36:56.380 "dma_device_type": 1 00:36:56.380 }, 00:36:56.380 { 00:36:56.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:56.380 "dma_device_type": 2 00:36:56.380 } 00:36:56.380 ], 00:36:56.380 "driver_specific": { 00:36:56.380 "raid": { 00:36:56.380 "uuid": "7c247d8c-92b1-4771-8e23-ebd14da9db40", 00:36:56.380 "strip_size_kb": 0, 00:36:56.380 "state": "online", 00:36:56.380 "raid_level": "raid1", 00:36:56.380 "superblock": false, 00:36:56.380 "num_base_bdevs": 4, 00:36:56.380 "num_base_bdevs_discovered": 4, 00:36:56.380 "num_base_bdevs_operational": 4, 00:36:56.380 "base_bdevs_list": [ 00:36:56.380 { 00:36:56.380 "name": "BaseBdev1", 00:36:56.380 "uuid": "5d769972-7509-4e63-8bf5-0b849cc005dd", 00:36:56.380 "is_configured": true, 00:36:56.380 "data_offset": 0, 00:36:56.380 "data_size": 65536 00:36:56.380 }, 00:36:56.380 { 00:36:56.380 "name": "BaseBdev2", 00:36:56.380 "uuid": "0d04d49c-9013-4ea8-8a54-41beb33fad9e", 00:36:56.380 "is_configured": true, 00:36:56.380 "data_offset": 0, 00:36:56.380 "data_size": 65536 00:36:56.380 }, 00:36:56.380 { 00:36:56.380 "name": "BaseBdev3", 00:36:56.380 "uuid": "b949babe-ce94-4cba-972f-585ad96a77f2", 00:36:56.380 "is_configured": true, 00:36:56.380 "data_offset": 0, 00:36:56.380 "data_size": 65536 00:36:56.380 }, 00:36:56.380 { 00:36:56.380 "name": "BaseBdev4", 00:36:56.380 "uuid": "d51bdf0f-17ed-49e8-9e7a-fa7dbc472d31", 00:36:56.380 "is_configured": true, 00:36:56.380 "data_offset": 0, 00:36:56.380 "data_size": 65536 00:36:56.380 } 00:36:56.380 ] 00:36:56.380 } 00:36:56.380 } 00:36:56.380 }' 00:36:56.380 17:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:56.380 17:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:36:56.380 BaseBdev2 00:36:56.380 BaseBdev3 
00:36:56.380 BaseBdev4' 00:36:56.380 17:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:56.380 17:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:36:56.380 17:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:56.380 17:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:36:56.380 17:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:56.380 17:33:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:56.380 17:33:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:56.380 17:33:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:56.380 17:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:56.380 17:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:56.380 17:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:56.380 17:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:36:56.380 17:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:56.380 17:33:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:56.380 17:33:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:56.380 17:33:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:56.380 17:33:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:56.380 17:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:56.380 17:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:56.380 17:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:56.380 17:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:36:56.380 17:33:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:56.380 17:33:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:56.380 17:33:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:56.380 17:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:56.380 17:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:56.380 17:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:56.380 17:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:36:56.380 17:33:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:56.380 17:33:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:56.380 17:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:56.380 17:33:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:56.640 17:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:56.640 17:33:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:56.640 17:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:36:56.640 17:33:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:56.640 17:33:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:56.640 [2024-11-26 17:33:57.111205] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:56.640 17:33:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:56.640 17:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:36:56.640 17:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:36:56.640 17:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:36:56.640 17:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:36:56.640 17:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:36:56.640 17:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:36:56.640 17:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:56.640 17:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:56.640 17:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:56.640 17:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:56.640 17:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:56.640 17:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:56.640 
17:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:56.640 17:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:56.640 17:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:56.640 17:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:56.640 17:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:56.640 17:33:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:56.640 17:33:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:56.640 17:33:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:56.640 17:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:56.640 "name": "Existed_Raid", 00:36:56.640 "uuid": "7c247d8c-92b1-4771-8e23-ebd14da9db40", 00:36:56.640 "strip_size_kb": 0, 00:36:56.640 "state": "online", 00:36:56.640 "raid_level": "raid1", 00:36:56.640 "superblock": false, 00:36:56.640 "num_base_bdevs": 4, 00:36:56.640 "num_base_bdevs_discovered": 3, 00:36:56.640 "num_base_bdevs_operational": 3, 00:36:56.640 "base_bdevs_list": [ 00:36:56.640 { 00:36:56.640 "name": null, 00:36:56.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:56.640 "is_configured": false, 00:36:56.640 "data_offset": 0, 00:36:56.640 "data_size": 65536 00:36:56.640 }, 00:36:56.640 { 00:36:56.640 "name": "BaseBdev2", 00:36:56.640 "uuid": "0d04d49c-9013-4ea8-8a54-41beb33fad9e", 00:36:56.640 "is_configured": true, 00:36:56.640 "data_offset": 0, 00:36:56.640 "data_size": 65536 00:36:56.640 }, 00:36:56.640 { 00:36:56.640 "name": "BaseBdev3", 00:36:56.640 "uuid": "b949babe-ce94-4cba-972f-585ad96a77f2", 00:36:56.640 "is_configured": true, 00:36:56.640 "data_offset": 0, 
00:36:56.640 "data_size": 65536 00:36:56.640 }, 00:36:56.640 { 00:36:56.640 "name": "BaseBdev4", 00:36:56.640 "uuid": "d51bdf0f-17ed-49e8-9e7a-fa7dbc472d31", 00:36:56.640 "is_configured": true, 00:36:56.640 "data_offset": 0, 00:36:56.640 "data_size": 65536 00:36:56.640 } 00:36:56.640 ] 00:36:56.640 }' 00:36:56.640 17:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:56.640 17:33:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:57.210 17:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:36:57.210 17:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:36:57.210 17:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:57.210 17:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:36:57.210 17:33:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.210 17:33:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:57.211 17:33:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.211 17:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:36:57.211 17:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:57.211 17:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:36:57.211 17:33:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.211 17:33:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:57.211 [2024-11-26 17:33:57.721102] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:36:57.211 17:33:57 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.211 17:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:36:57.211 17:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:36:57.211 17:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:57.211 17:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:36:57.211 17:33:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.211 17:33:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:57.211 17:33:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.211 17:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:36:57.211 17:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:57.211 17:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:36:57.211 17:33:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.211 17:33:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:57.211 [2024-11-26 17:33:57.896921] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:36:57.470 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.470 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:36:57.470 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:36:57.470 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:36:57.470 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:36:57.470 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.470 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:57.470 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.470 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:36:57.470 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:57.470 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:36:57.470 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.470 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:57.470 [2024-11-26 17:33:58.055706] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:36:57.470 [2024-11-26 17:33:58.055814] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:57.470 [2024-11-26 17:33:58.159686] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:57.470 [2024-11-26 17:33:58.159742] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:57.470 [2024-11-26 17:33:58.159756] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:36:57.470 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.470 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:36:57.470 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:36:57.731 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:36:57.731 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.731 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:57.731 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:36:57.731 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.731 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:36:57.731 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:36:57.731 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:36:57.731 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:36:57.731 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:36:57.731 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:36:57.731 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.731 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:57.731 BaseBdev2 00:36:57.731 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.731 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:36:57.731 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:36:57.731 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:57.731 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:36:57.731 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 
-- # [[ -z '' ]] 00:36:57.731 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:57.731 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:57.731 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.731 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:57.731 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.731 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:36:57.731 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.731 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:57.731 [ 00:36:57.731 { 00:36:57.731 "name": "BaseBdev2", 00:36:57.731 "aliases": [ 00:36:57.731 "e9d0f247-38b8-4e74-97f6-ba019cce4ae1" 00:36:57.731 ], 00:36:57.731 "product_name": "Malloc disk", 00:36:57.731 "block_size": 512, 00:36:57.731 "num_blocks": 65536, 00:36:57.731 "uuid": "e9d0f247-38b8-4e74-97f6-ba019cce4ae1", 00:36:57.731 "assigned_rate_limits": { 00:36:57.731 "rw_ios_per_sec": 0, 00:36:57.731 "rw_mbytes_per_sec": 0, 00:36:57.731 "r_mbytes_per_sec": 0, 00:36:57.731 "w_mbytes_per_sec": 0 00:36:57.731 }, 00:36:57.731 "claimed": false, 00:36:57.731 "zoned": false, 00:36:57.731 "supported_io_types": { 00:36:57.731 "read": true, 00:36:57.731 "write": true, 00:36:57.731 "unmap": true, 00:36:57.731 "flush": true, 00:36:57.731 "reset": true, 00:36:57.731 "nvme_admin": false, 00:36:57.731 "nvme_io": false, 00:36:57.731 "nvme_io_md": false, 00:36:57.731 "write_zeroes": true, 00:36:57.731 "zcopy": true, 00:36:57.731 "get_zone_info": false, 00:36:57.731 "zone_management": false, 00:36:57.731 "zone_append": false, 00:36:57.731 "compare": false, 
00:36:57.731 "compare_and_write": false, 00:36:57.731 "abort": true, 00:36:57.731 "seek_hole": false, 00:36:57.731 "seek_data": false, 00:36:57.731 "copy": true, 00:36:57.731 "nvme_iov_md": false 00:36:57.731 }, 00:36:57.731 "memory_domains": [ 00:36:57.731 { 00:36:57.731 "dma_device_id": "system", 00:36:57.731 "dma_device_type": 1 00:36:57.731 }, 00:36:57.731 { 00:36:57.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:57.731 "dma_device_type": 2 00:36:57.731 } 00:36:57.731 ], 00:36:57.731 "driver_specific": {} 00:36:57.731 } 00:36:57.731 ] 00:36:57.731 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.731 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:36:57.731 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:36:57.731 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:36:57.731 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:36:57.731 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.731 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:57.731 BaseBdev3 00:36:57.731 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.731 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:36:57.731 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:36:57.731 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:57.731 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:36:57.731 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' 
]] 00:36:57.731 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:57.731 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:57.731 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.731 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:57.731 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.731 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:36:57.731 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.731 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:57.731 [ 00:36:57.731 { 00:36:57.731 "name": "BaseBdev3", 00:36:57.731 "aliases": [ 00:36:57.731 "f57a5f82-e132-4bd9-a4cc-74662c9e0f24" 00:36:57.731 ], 00:36:57.731 "product_name": "Malloc disk", 00:36:57.731 "block_size": 512, 00:36:57.731 "num_blocks": 65536, 00:36:57.731 "uuid": "f57a5f82-e132-4bd9-a4cc-74662c9e0f24", 00:36:57.731 "assigned_rate_limits": { 00:36:57.731 "rw_ios_per_sec": 0, 00:36:57.731 "rw_mbytes_per_sec": 0, 00:36:57.731 "r_mbytes_per_sec": 0, 00:36:57.731 "w_mbytes_per_sec": 0 00:36:57.731 }, 00:36:57.731 "claimed": false, 00:36:57.731 "zoned": false, 00:36:57.731 "supported_io_types": { 00:36:57.731 "read": true, 00:36:57.731 "write": true, 00:36:57.731 "unmap": true, 00:36:57.731 "flush": true, 00:36:57.731 "reset": true, 00:36:57.731 "nvme_admin": false, 00:36:57.731 "nvme_io": false, 00:36:57.731 "nvme_io_md": false, 00:36:57.731 "write_zeroes": true, 00:36:57.731 "zcopy": true, 00:36:57.731 "get_zone_info": false, 00:36:57.732 "zone_management": false, 00:36:57.732 "zone_append": false, 00:36:57.732 "compare": false, 00:36:57.732 
"compare_and_write": false, 00:36:57.732 "abort": true, 00:36:57.732 "seek_hole": false, 00:36:57.732 "seek_data": false, 00:36:57.732 "copy": true, 00:36:57.732 "nvme_iov_md": false 00:36:57.732 }, 00:36:57.732 "memory_domains": [ 00:36:57.732 { 00:36:57.732 "dma_device_id": "system", 00:36:57.732 "dma_device_type": 1 00:36:57.732 }, 00:36:57.732 { 00:36:57.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:57.732 "dma_device_type": 2 00:36:57.732 } 00:36:57.732 ], 00:36:57.732 "driver_specific": {} 00:36:57.732 } 00:36:57.732 ] 00:36:57.732 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.732 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:36:57.732 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:36:57.732 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:36:57.732 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:36:57.732 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.732 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:57.991 BaseBdev4 00:36:57.991 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.991 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:36:57.991 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:36:57.991 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:57.992 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:36:57.992 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 
00:36:57.992 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:57.992 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:57.992 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.992 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:57.992 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.992 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:36:57.992 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.992 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:57.992 [ 00:36:57.992 { 00:36:57.992 "name": "BaseBdev4", 00:36:57.992 "aliases": [ 00:36:57.992 "76d838f5-d8d9-424c-bb4e-c475d521cdc1" 00:36:57.992 ], 00:36:57.992 "product_name": "Malloc disk", 00:36:57.992 "block_size": 512, 00:36:57.992 "num_blocks": 65536, 00:36:57.992 "uuid": "76d838f5-d8d9-424c-bb4e-c475d521cdc1", 00:36:57.992 "assigned_rate_limits": { 00:36:57.992 "rw_ios_per_sec": 0, 00:36:57.992 "rw_mbytes_per_sec": 0, 00:36:57.992 "r_mbytes_per_sec": 0, 00:36:57.992 "w_mbytes_per_sec": 0 00:36:57.992 }, 00:36:57.992 "claimed": false, 00:36:57.992 "zoned": false, 00:36:57.992 "supported_io_types": { 00:36:57.992 "read": true, 00:36:57.992 "write": true, 00:36:57.992 "unmap": true, 00:36:57.992 "flush": true, 00:36:57.992 "reset": true, 00:36:57.992 "nvme_admin": false, 00:36:57.992 "nvme_io": false, 00:36:57.992 "nvme_io_md": false, 00:36:57.992 "write_zeroes": true, 00:36:57.992 "zcopy": true, 00:36:57.992 "get_zone_info": false, 00:36:57.992 "zone_management": false, 00:36:57.992 "zone_append": false, 00:36:57.992 "compare": false, 00:36:57.992 
"compare_and_write": false, 00:36:57.992 "abort": true, 00:36:57.992 "seek_hole": false, 00:36:57.992 "seek_data": false, 00:36:57.992 "copy": true, 00:36:57.992 "nvme_iov_md": false 00:36:57.992 }, 00:36:57.992 "memory_domains": [ 00:36:57.992 { 00:36:57.992 "dma_device_id": "system", 00:36:57.992 "dma_device_type": 1 00:36:57.992 }, 00:36:57.992 { 00:36:57.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:57.992 "dma_device_type": 2 00:36:57.992 } 00:36:57.992 ], 00:36:57.992 "driver_specific": {} 00:36:57.992 } 00:36:57.992 ] 00:36:57.992 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.992 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:36:57.992 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:36:57.992 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:36:57.992 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:36:57.992 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.992 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:57.992 [2024-11-26 17:33:58.485756] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:57.992 [2024-11-26 17:33:58.485871] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:57.992 [2024-11-26 17:33:58.485926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:57.992 [2024-11-26 17:33:58.488181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:57.992 [2024-11-26 17:33:58.488288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 
00:36:57.992 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.992 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:36:57.992 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:57.992 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:57.992 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:57.992 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:57.992 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:57.992 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:57.992 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:57.992 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:57.992 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:57.992 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:57.992 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:57.992 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.992 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:57.992 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.992 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:57.992 "name": "Existed_Raid", 00:36:57.992 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:36:57.992 "strip_size_kb": 0, 00:36:57.992 "state": "configuring", 00:36:57.992 "raid_level": "raid1", 00:36:57.992 "superblock": false, 00:36:57.992 "num_base_bdevs": 4, 00:36:57.992 "num_base_bdevs_discovered": 3, 00:36:57.992 "num_base_bdevs_operational": 4, 00:36:57.992 "base_bdevs_list": [ 00:36:57.992 { 00:36:57.992 "name": "BaseBdev1", 00:36:57.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:57.992 "is_configured": false, 00:36:57.992 "data_offset": 0, 00:36:57.992 "data_size": 0 00:36:57.992 }, 00:36:57.992 { 00:36:57.992 "name": "BaseBdev2", 00:36:57.992 "uuid": "e9d0f247-38b8-4e74-97f6-ba019cce4ae1", 00:36:57.992 "is_configured": true, 00:36:57.992 "data_offset": 0, 00:36:57.992 "data_size": 65536 00:36:57.992 }, 00:36:57.992 { 00:36:57.992 "name": "BaseBdev3", 00:36:57.992 "uuid": "f57a5f82-e132-4bd9-a4cc-74662c9e0f24", 00:36:57.992 "is_configured": true, 00:36:57.992 "data_offset": 0, 00:36:57.992 "data_size": 65536 00:36:57.992 }, 00:36:57.992 { 00:36:57.992 "name": "BaseBdev4", 00:36:57.992 "uuid": "76d838f5-d8d9-424c-bb4e-c475d521cdc1", 00:36:57.992 "is_configured": true, 00:36:57.992 "data_offset": 0, 00:36:57.992 "data_size": 65536 00:36:57.992 } 00:36:57.992 ] 00:36:57.992 }' 00:36:57.992 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:57.992 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:58.252 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:36:58.252 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.252 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:58.252 [2024-11-26 17:33:58.937097] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:36:58.252 17:33:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.252 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:36:58.252 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:58.252 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:58.252 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:58.252 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:58.252 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:58.252 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:58.252 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:58.252 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:58.252 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:58.512 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:58.512 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:58.512 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.512 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:58.512 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.512 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:58.512 "name": "Existed_Raid", 00:36:58.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:58.512 
"strip_size_kb": 0, 00:36:58.512 "state": "configuring", 00:36:58.512 "raid_level": "raid1", 00:36:58.512 "superblock": false, 00:36:58.512 "num_base_bdevs": 4, 00:36:58.512 "num_base_bdevs_discovered": 2, 00:36:58.512 "num_base_bdevs_operational": 4, 00:36:58.512 "base_bdevs_list": [ 00:36:58.512 { 00:36:58.512 "name": "BaseBdev1", 00:36:58.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:58.512 "is_configured": false, 00:36:58.512 "data_offset": 0, 00:36:58.512 "data_size": 0 00:36:58.512 }, 00:36:58.512 { 00:36:58.512 "name": null, 00:36:58.512 "uuid": "e9d0f247-38b8-4e74-97f6-ba019cce4ae1", 00:36:58.512 "is_configured": false, 00:36:58.512 "data_offset": 0, 00:36:58.512 "data_size": 65536 00:36:58.512 }, 00:36:58.512 { 00:36:58.512 "name": "BaseBdev3", 00:36:58.512 "uuid": "f57a5f82-e132-4bd9-a4cc-74662c9e0f24", 00:36:58.512 "is_configured": true, 00:36:58.512 "data_offset": 0, 00:36:58.512 "data_size": 65536 00:36:58.512 }, 00:36:58.512 { 00:36:58.512 "name": "BaseBdev4", 00:36:58.512 "uuid": "76d838f5-d8d9-424c-bb4e-c475d521cdc1", 00:36:58.512 "is_configured": true, 00:36:58.512 "data_offset": 0, 00:36:58.512 "data_size": 65536 00:36:58.512 } 00:36:58.512 ] 00:36:58.512 }' 00:36:58.513 17:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:58.513 17:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:58.771 17:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:36:58.771 17:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:58.771 17:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.771 17:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:58.771 17:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.771 17:33:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:36:58.771 17:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:36:58.771 17:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.771 17:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:58.771 [2024-11-26 17:33:59.446360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:58.771 BaseBdev1 00:36:58.771 17:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.771 17:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:36:58.771 17:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:36:58.771 17:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:58.771 17:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:36:58.771 17:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:58.771 17:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:58.771 17:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:58.771 17:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.771 17:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:58.771 17:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.771 17:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:36:58.771 17:33:59 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.771 17:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:59.030 [ 00:36:59.030 { 00:36:59.030 "name": "BaseBdev1", 00:36:59.030 "aliases": [ 00:36:59.030 "5eece1c7-c087-43bc-8ed9-bcd5f92bde68" 00:36:59.030 ], 00:36:59.030 "product_name": "Malloc disk", 00:36:59.030 "block_size": 512, 00:36:59.030 "num_blocks": 65536, 00:36:59.030 "uuid": "5eece1c7-c087-43bc-8ed9-bcd5f92bde68", 00:36:59.030 "assigned_rate_limits": { 00:36:59.030 "rw_ios_per_sec": 0, 00:36:59.030 "rw_mbytes_per_sec": 0, 00:36:59.030 "r_mbytes_per_sec": 0, 00:36:59.030 "w_mbytes_per_sec": 0 00:36:59.030 }, 00:36:59.030 "claimed": true, 00:36:59.030 "claim_type": "exclusive_write", 00:36:59.030 "zoned": false, 00:36:59.030 "supported_io_types": { 00:36:59.030 "read": true, 00:36:59.030 "write": true, 00:36:59.030 "unmap": true, 00:36:59.030 "flush": true, 00:36:59.030 "reset": true, 00:36:59.030 "nvme_admin": false, 00:36:59.030 "nvme_io": false, 00:36:59.030 "nvme_io_md": false, 00:36:59.030 "write_zeroes": true, 00:36:59.030 "zcopy": true, 00:36:59.030 "get_zone_info": false, 00:36:59.030 "zone_management": false, 00:36:59.030 "zone_append": false, 00:36:59.030 "compare": false, 00:36:59.030 "compare_and_write": false, 00:36:59.030 "abort": true, 00:36:59.030 "seek_hole": false, 00:36:59.030 "seek_data": false, 00:36:59.030 "copy": true, 00:36:59.030 "nvme_iov_md": false 00:36:59.030 }, 00:36:59.030 "memory_domains": [ 00:36:59.030 { 00:36:59.030 "dma_device_id": "system", 00:36:59.030 "dma_device_type": 1 00:36:59.030 }, 00:36:59.030 { 00:36:59.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:59.030 "dma_device_type": 2 00:36:59.030 } 00:36:59.030 ], 00:36:59.030 "driver_specific": {} 00:36:59.030 } 00:36:59.030 ] 00:36:59.030 17:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.030 17:33:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@911 -- # return 0 00:36:59.030 17:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:36:59.030 17:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:59.030 17:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:59.030 17:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:59.030 17:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:59.030 17:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:59.030 17:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:59.030 17:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:59.030 17:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:59.030 17:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:59.030 17:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:59.030 17:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:59.030 17:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.030 17:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:59.030 17:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.030 17:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:59.030 "name": "Existed_Raid", 00:36:59.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:59.030 
"strip_size_kb": 0, 00:36:59.030 "state": "configuring", 00:36:59.030 "raid_level": "raid1", 00:36:59.030 "superblock": false, 00:36:59.030 "num_base_bdevs": 4, 00:36:59.030 "num_base_bdevs_discovered": 3, 00:36:59.030 "num_base_bdevs_operational": 4, 00:36:59.030 "base_bdevs_list": [ 00:36:59.030 { 00:36:59.030 "name": "BaseBdev1", 00:36:59.030 "uuid": "5eece1c7-c087-43bc-8ed9-bcd5f92bde68", 00:36:59.030 "is_configured": true, 00:36:59.030 "data_offset": 0, 00:36:59.030 "data_size": 65536 00:36:59.030 }, 00:36:59.030 { 00:36:59.030 "name": null, 00:36:59.030 "uuid": "e9d0f247-38b8-4e74-97f6-ba019cce4ae1", 00:36:59.030 "is_configured": false, 00:36:59.030 "data_offset": 0, 00:36:59.030 "data_size": 65536 00:36:59.030 }, 00:36:59.030 { 00:36:59.030 "name": "BaseBdev3", 00:36:59.030 "uuid": "f57a5f82-e132-4bd9-a4cc-74662c9e0f24", 00:36:59.030 "is_configured": true, 00:36:59.030 "data_offset": 0, 00:36:59.030 "data_size": 65536 00:36:59.030 }, 00:36:59.030 { 00:36:59.030 "name": "BaseBdev4", 00:36:59.030 "uuid": "76d838f5-d8d9-424c-bb4e-c475d521cdc1", 00:36:59.030 "is_configured": true, 00:36:59.030 "data_offset": 0, 00:36:59.030 "data_size": 65536 00:36:59.030 } 00:36:59.030 ] 00:36:59.030 }' 00:36:59.030 17:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:59.030 17:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:59.290 17:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:59.290 17:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.290 17:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:36:59.290 17:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:59.290 17:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.290 
17:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:36:59.290 17:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:36:59.290 17:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.290 17:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:59.290 [2024-11-26 17:33:59.933674] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:36:59.290 17:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.290 17:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:36:59.290 17:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:59.290 17:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:59.290 17:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:59.290 17:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:59.290 17:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:59.290 17:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:59.290 17:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:59.290 17:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:59.290 17:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:59.290 17:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:59.290 17:33:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:59.290 17:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.290 17:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:59.290 17:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.550 17:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:59.550 "name": "Existed_Raid", 00:36:59.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:59.550 "strip_size_kb": 0, 00:36:59.550 "state": "configuring", 00:36:59.550 "raid_level": "raid1", 00:36:59.550 "superblock": false, 00:36:59.550 "num_base_bdevs": 4, 00:36:59.550 "num_base_bdevs_discovered": 2, 00:36:59.550 "num_base_bdevs_operational": 4, 00:36:59.550 "base_bdevs_list": [ 00:36:59.550 { 00:36:59.550 "name": "BaseBdev1", 00:36:59.550 "uuid": "5eece1c7-c087-43bc-8ed9-bcd5f92bde68", 00:36:59.550 "is_configured": true, 00:36:59.550 "data_offset": 0, 00:36:59.550 "data_size": 65536 00:36:59.550 }, 00:36:59.550 { 00:36:59.550 "name": null, 00:36:59.550 "uuid": "e9d0f247-38b8-4e74-97f6-ba019cce4ae1", 00:36:59.550 "is_configured": false, 00:36:59.550 "data_offset": 0, 00:36:59.550 "data_size": 65536 00:36:59.550 }, 00:36:59.550 { 00:36:59.550 "name": null, 00:36:59.550 "uuid": "f57a5f82-e132-4bd9-a4cc-74662c9e0f24", 00:36:59.550 "is_configured": false, 00:36:59.550 "data_offset": 0, 00:36:59.550 "data_size": 65536 00:36:59.550 }, 00:36:59.550 { 00:36:59.550 "name": "BaseBdev4", 00:36:59.550 "uuid": "76d838f5-d8d9-424c-bb4e-c475d521cdc1", 00:36:59.550 "is_configured": true, 00:36:59.550 "data_offset": 0, 00:36:59.550 "data_size": 65536 00:36:59.550 } 00:36:59.550 ] 00:36:59.550 }' 00:36:59.550 17:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:59.550 17:33:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:36:59.810 17:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:59.810 17:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:36:59.810 17:34:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.810 17:34:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:59.810 17:34:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.810 17:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:36:59.810 17:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:36:59.810 17:34:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.810 17:34:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:59.810 [2024-11-26 17:34:00.436789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:59.810 17:34:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.810 17:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:36:59.810 17:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:59.810 17:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:59.810 17:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:59.810 17:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:59.810 17:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:36:59.810 17:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:59.810 17:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:59.810 17:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:59.810 17:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:59.810 17:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:59.810 17:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:59.810 17:34:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.810 17:34:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:59.810 17:34:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.810 17:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:59.810 "name": "Existed_Raid", 00:36:59.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:59.810 "strip_size_kb": 0, 00:36:59.810 "state": "configuring", 00:36:59.810 "raid_level": "raid1", 00:36:59.810 "superblock": false, 00:36:59.810 "num_base_bdevs": 4, 00:36:59.810 "num_base_bdevs_discovered": 3, 00:36:59.810 "num_base_bdevs_operational": 4, 00:36:59.810 "base_bdevs_list": [ 00:36:59.810 { 00:36:59.810 "name": "BaseBdev1", 00:36:59.810 "uuid": "5eece1c7-c087-43bc-8ed9-bcd5f92bde68", 00:36:59.810 "is_configured": true, 00:36:59.810 "data_offset": 0, 00:36:59.810 "data_size": 65536 00:36:59.810 }, 00:36:59.810 { 00:36:59.810 "name": null, 00:36:59.810 "uuid": "e9d0f247-38b8-4e74-97f6-ba019cce4ae1", 00:36:59.810 "is_configured": false, 00:36:59.810 "data_offset": 0, 00:36:59.810 "data_size": 65536 00:36:59.810 }, 00:36:59.810 { 
00:36:59.810 "name": "BaseBdev3", 00:36:59.810 "uuid": "f57a5f82-e132-4bd9-a4cc-74662c9e0f24", 00:36:59.810 "is_configured": true, 00:36:59.810 "data_offset": 0, 00:36:59.810 "data_size": 65536 00:36:59.810 }, 00:36:59.810 { 00:36:59.810 "name": "BaseBdev4", 00:36:59.810 "uuid": "76d838f5-d8d9-424c-bb4e-c475d521cdc1", 00:36:59.810 "is_configured": true, 00:36:59.810 "data_offset": 0, 00:36:59.810 "data_size": 65536 00:36:59.810 } 00:36:59.810 ] 00:36:59.810 }' 00:36:59.810 17:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:59.810 17:34:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:00.403 17:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:00.403 17:34:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.403 17:34:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:00.403 17:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:37:00.403 17:34:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.403 17:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:37:00.403 17:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:37:00.403 17:34:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.403 17:34:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:00.403 [2024-11-26 17:34:00.916092] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:37:00.404 17:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.404 17:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:37:00.404 17:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:00.404 17:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:00.404 17:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:00.404 17:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:00.404 17:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:00.404 17:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:00.404 17:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:00.404 17:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:00.404 17:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:00.404 17:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:00.404 17:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:00.404 17:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.404 17:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:00.404 17:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.404 17:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:00.404 "name": "Existed_Raid", 00:37:00.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:00.404 "strip_size_kb": 0, 00:37:00.404 "state": "configuring", 00:37:00.404 "raid_level": "raid1", 00:37:00.404 "superblock": false, 00:37:00.404 
"num_base_bdevs": 4, 00:37:00.404 "num_base_bdevs_discovered": 2, 00:37:00.404 "num_base_bdevs_operational": 4, 00:37:00.404 "base_bdevs_list": [ 00:37:00.404 { 00:37:00.404 "name": null, 00:37:00.404 "uuid": "5eece1c7-c087-43bc-8ed9-bcd5f92bde68", 00:37:00.404 "is_configured": false, 00:37:00.404 "data_offset": 0, 00:37:00.404 "data_size": 65536 00:37:00.404 }, 00:37:00.404 { 00:37:00.404 "name": null, 00:37:00.404 "uuid": "e9d0f247-38b8-4e74-97f6-ba019cce4ae1", 00:37:00.404 "is_configured": false, 00:37:00.404 "data_offset": 0, 00:37:00.404 "data_size": 65536 00:37:00.404 }, 00:37:00.404 { 00:37:00.404 "name": "BaseBdev3", 00:37:00.404 "uuid": "f57a5f82-e132-4bd9-a4cc-74662c9e0f24", 00:37:00.404 "is_configured": true, 00:37:00.404 "data_offset": 0, 00:37:00.404 "data_size": 65536 00:37:00.404 }, 00:37:00.404 { 00:37:00.404 "name": "BaseBdev4", 00:37:00.404 "uuid": "76d838f5-d8d9-424c-bb4e-c475d521cdc1", 00:37:00.404 "is_configured": true, 00:37:00.404 "data_offset": 0, 00:37:00.404 "data_size": 65536 00:37:00.404 } 00:37:00.404 ] 00:37:00.404 }' 00:37:00.404 17:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:00.404 17:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:00.973 17:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:00.973 17:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.973 17:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:00.973 17:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:37:00.973 17:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.973 17:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:37:00.973 17:34:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:37:00.973 17:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.973 17:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:00.973 [2024-11-26 17:34:01.524184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:00.973 17:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.973 17:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:37:00.973 17:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:00.973 17:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:00.973 17:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:00.973 17:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:00.973 17:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:00.973 17:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:00.973 17:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:00.973 17:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:00.973 17:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:00.973 17:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:00.973 17:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:00.973 17:34:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.973 17:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:00.973 17:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.973 17:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:00.973 "name": "Existed_Raid", 00:37:00.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:00.973 "strip_size_kb": 0, 00:37:00.973 "state": "configuring", 00:37:00.973 "raid_level": "raid1", 00:37:00.973 "superblock": false, 00:37:00.973 "num_base_bdevs": 4, 00:37:00.973 "num_base_bdevs_discovered": 3, 00:37:00.973 "num_base_bdevs_operational": 4, 00:37:00.973 "base_bdevs_list": [ 00:37:00.973 { 00:37:00.973 "name": null, 00:37:00.973 "uuid": "5eece1c7-c087-43bc-8ed9-bcd5f92bde68", 00:37:00.973 "is_configured": false, 00:37:00.973 "data_offset": 0, 00:37:00.973 "data_size": 65536 00:37:00.973 }, 00:37:00.973 { 00:37:00.973 "name": "BaseBdev2", 00:37:00.973 "uuid": "e9d0f247-38b8-4e74-97f6-ba019cce4ae1", 00:37:00.973 "is_configured": true, 00:37:00.973 "data_offset": 0, 00:37:00.973 "data_size": 65536 00:37:00.973 }, 00:37:00.973 { 00:37:00.973 "name": "BaseBdev3", 00:37:00.973 "uuid": "f57a5f82-e132-4bd9-a4cc-74662c9e0f24", 00:37:00.973 "is_configured": true, 00:37:00.973 "data_offset": 0, 00:37:00.973 "data_size": 65536 00:37:00.973 }, 00:37:00.973 { 00:37:00.973 "name": "BaseBdev4", 00:37:00.973 "uuid": "76d838f5-d8d9-424c-bb4e-c475d521cdc1", 00:37:00.973 "is_configured": true, 00:37:00.973 "data_offset": 0, 00:37:00.973 "data_size": 65536 00:37:00.973 } 00:37:00.973 ] 00:37:00.973 }' 00:37:00.973 17:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:00.973 17:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:01.541 17:34:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:01.541 17:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:37:01.541 17:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:01.541 17:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:01.541 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:01.541 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:37:01.541 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:37:01.541 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:01.541 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:01.541 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:01.541 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:01.541 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5eece1c7-c087-43bc-8ed9-bcd5f92bde68 00:37:01.541 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:01.541 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:01.541 [2024-11-26 17:34:02.111024] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:37:01.541 [2024-11-26 17:34:02.111066] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:37:01.541 [2024-11-26 17:34:02.111075] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:37:01.541 [2024-11-26 17:34:02.111321] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:37:01.541 [2024-11-26 17:34:02.111468] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:37:01.541 [2024-11-26 17:34:02.111478] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:37:01.541 [2024-11-26 17:34:02.111742] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:01.541 NewBaseBdev 00:37:01.541 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:01.541 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:37:01.541 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:37:01.541 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:01.541 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:37:01.541 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:01.541 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:01.541 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:37:01.541 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:01.541 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:01.541 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:01.541 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:37:01.541 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:01.541 17:34:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:01.541 [ 00:37:01.541 { 00:37:01.541 "name": "NewBaseBdev", 00:37:01.541 "aliases": [ 00:37:01.541 "5eece1c7-c087-43bc-8ed9-bcd5f92bde68" 00:37:01.541 ], 00:37:01.541 "product_name": "Malloc disk", 00:37:01.541 "block_size": 512, 00:37:01.541 "num_blocks": 65536, 00:37:01.541 "uuid": "5eece1c7-c087-43bc-8ed9-bcd5f92bde68", 00:37:01.541 "assigned_rate_limits": { 00:37:01.541 "rw_ios_per_sec": 0, 00:37:01.541 "rw_mbytes_per_sec": 0, 00:37:01.541 "r_mbytes_per_sec": 0, 00:37:01.541 "w_mbytes_per_sec": 0 00:37:01.541 }, 00:37:01.541 "claimed": true, 00:37:01.541 "claim_type": "exclusive_write", 00:37:01.541 "zoned": false, 00:37:01.541 "supported_io_types": { 00:37:01.541 "read": true, 00:37:01.541 "write": true, 00:37:01.541 "unmap": true, 00:37:01.541 "flush": true, 00:37:01.541 "reset": true, 00:37:01.541 "nvme_admin": false, 00:37:01.541 "nvme_io": false, 00:37:01.541 "nvme_io_md": false, 00:37:01.541 "write_zeroes": true, 00:37:01.541 "zcopy": true, 00:37:01.541 "get_zone_info": false, 00:37:01.541 "zone_management": false, 00:37:01.541 "zone_append": false, 00:37:01.541 "compare": false, 00:37:01.541 "compare_and_write": false, 00:37:01.541 "abort": true, 00:37:01.541 "seek_hole": false, 00:37:01.541 "seek_data": false, 00:37:01.541 "copy": true, 00:37:01.541 "nvme_iov_md": false 00:37:01.541 }, 00:37:01.541 "memory_domains": [ 00:37:01.541 { 00:37:01.541 "dma_device_id": "system", 00:37:01.541 "dma_device_type": 1 00:37:01.541 }, 00:37:01.541 { 00:37:01.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:01.541 "dma_device_type": 2 00:37:01.541 } 00:37:01.541 ], 00:37:01.541 "driver_specific": {} 00:37:01.541 } 00:37:01.541 ] 00:37:01.541 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:01.541 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:37:01.541 17:34:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:37:01.541 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:01.541 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:01.541 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:01.541 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:01.541 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:01.541 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:01.541 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:01.541 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:01.541 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:01.541 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:01.541 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:01.541 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:01.541 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:01.541 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:01.541 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:01.541 "name": "Existed_Raid", 00:37:01.541 "uuid": "b26e99da-940c-44f5-9299-ad2baf2bd170", 00:37:01.541 "strip_size_kb": 0, 00:37:01.541 "state": "online", 00:37:01.541 "raid_level": "raid1", 
00:37:01.541 "superblock": false, 00:37:01.541 "num_base_bdevs": 4, 00:37:01.541 "num_base_bdevs_discovered": 4, 00:37:01.541 "num_base_bdevs_operational": 4, 00:37:01.541 "base_bdevs_list": [ 00:37:01.541 { 00:37:01.541 "name": "NewBaseBdev", 00:37:01.541 "uuid": "5eece1c7-c087-43bc-8ed9-bcd5f92bde68", 00:37:01.541 "is_configured": true, 00:37:01.541 "data_offset": 0, 00:37:01.541 "data_size": 65536 00:37:01.541 }, 00:37:01.541 { 00:37:01.541 "name": "BaseBdev2", 00:37:01.541 "uuid": "e9d0f247-38b8-4e74-97f6-ba019cce4ae1", 00:37:01.541 "is_configured": true, 00:37:01.541 "data_offset": 0, 00:37:01.541 "data_size": 65536 00:37:01.541 }, 00:37:01.541 { 00:37:01.541 "name": "BaseBdev3", 00:37:01.541 "uuid": "f57a5f82-e132-4bd9-a4cc-74662c9e0f24", 00:37:01.541 "is_configured": true, 00:37:01.541 "data_offset": 0, 00:37:01.541 "data_size": 65536 00:37:01.541 }, 00:37:01.541 { 00:37:01.541 "name": "BaseBdev4", 00:37:01.541 "uuid": "76d838f5-d8d9-424c-bb4e-c475d521cdc1", 00:37:01.541 "is_configured": true, 00:37:01.541 "data_offset": 0, 00:37:01.541 "data_size": 65536 00:37:01.541 } 00:37:01.541 ] 00:37:01.541 }' 00:37:01.541 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:01.541 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:02.108 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:37:02.108 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:37:02.109 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:37:02.109 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:37:02.109 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:37:02.109 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev 
cmp_base_bdev 00:37:02.109 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:37:02.109 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:37:02.109 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.109 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:02.109 [2024-11-26 17:34:02.586647] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:02.109 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.109 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:02.109 "name": "Existed_Raid", 00:37:02.109 "aliases": [ 00:37:02.109 "b26e99da-940c-44f5-9299-ad2baf2bd170" 00:37:02.109 ], 00:37:02.109 "product_name": "Raid Volume", 00:37:02.109 "block_size": 512, 00:37:02.109 "num_blocks": 65536, 00:37:02.109 "uuid": "b26e99da-940c-44f5-9299-ad2baf2bd170", 00:37:02.109 "assigned_rate_limits": { 00:37:02.109 "rw_ios_per_sec": 0, 00:37:02.109 "rw_mbytes_per_sec": 0, 00:37:02.109 "r_mbytes_per_sec": 0, 00:37:02.109 "w_mbytes_per_sec": 0 00:37:02.109 }, 00:37:02.109 "claimed": false, 00:37:02.109 "zoned": false, 00:37:02.109 "supported_io_types": { 00:37:02.109 "read": true, 00:37:02.109 "write": true, 00:37:02.109 "unmap": false, 00:37:02.109 "flush": false, 00:37:02.109 "reset": true, 00:37:02.109 "nvme_admin": false, 00:37:02.109 "nvme_io": false, 00:37:02.109 "nvme_io_md": false, 00:37:02.109 "write_zeroes": true, 00:37:02.109 "zcopy": false, 00:37:02.109 "get_zone_info": false, 00:37:02.109 "zone_management": false, 00:37:02.109 "zone_append": false, 00:37:02.109 "compare": false, 00:37:02.109 "compare_and_write": false, 00:37:02.109 "abort": false, 00:37:02.109 "seek_hole": false, 00:37:02.109 "seek_data": false, 00:37:02.109 "copy": false, 00:37:02.109 
"nvme_iov_md": false 00:37:02.109 }, 00:37:02.109 "memory_domains": [ 00:37:02.109 { 00:37:02.109 "dma_device_id": "system", 00:37:02.109 "dma_device_type": 1 00:37:02.109 }, 00:37:02.109 { 00:37:02.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:02.109 "dma_device_type": 2 00:37:02.109 }, 00:37:02.109 { 00:37:02.109 "dma_device_id": "system", 00:37:02.109 "dma_device_type": 1 00:37:02.109 }, 00:37:02.109 { 00:37:02.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:02.109 "dma_device_type": 2 00:37:02.109 }, 00:37:02.109 { 00:37:02.109 "dma_device_id": "system", 00:37:02.109 "dma_device_type": 1 00:37:02.109 }, 00:37:02.109 { 00:37:02.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:02.109 "dma_device_type": 2 00:37:02.109 }, 00:37:02.109 { 00:37:02.109 "dma_device_id": "system", 00:37:02.109 "dma_device_type": 1 00:37:02.109 }, 00:37:02.109 { 00:37:02.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:02.109 "dma_device_type": 2 00:37:02.109 } 00:37:02.109 ], 00:37:02.109 "driver_specific": { 00:37:02.109 "raid": { 00:37:02.109 "uuid": "b26e99da-940c-44f5-9299-ad2baf2bd170", 00:37:02.109 "strip_size_kb": 0, 00:37:02.109 "state": "online", 00:37:02.109 "raid_level": "raid1", 00:37:02.109 "superblock": false, 00:37:02.109 "num_base_bdevs": 4, 00:37:02.109 "num_base_bdevs_discovered": 4, 00:37:02.109 "num_base_bdevs_operational": 4, 00:37:02.109 "base_bdevs_list": [ 00:37:02.109 { 00:37:02.109 "name": "NewBaseBdev", 00:37:02.109 "uuid": "5eece1c7-c087-43bc-8ed9-bcd5f92bde68", 00:37:02.109 "is_configured": true, 00:37:02.109 "data_offset": 0, 00:37:02.109 "data_size": 65536 00:37:02.109 }, 00:37:02.109 { 00:37:02.109 "name": "BaseBdev2", 00:37:02.109 "uuid": "e9d0f247-38b8-4e74-97f6-ba019cce4ae1", 00:37:02.109 "is_configured": true, 00:37:02.109 "data_offset": 0, 00:37:02.109 "data_size": 65536 00:37:02.109 }, 00:37:02.109 { 00:37:02.109 "name": "BaseBdev3", 00:37:02.109 "uuid": "f57a5f82-e132-4bd9-a4cc-74662c9e0f24", 00:37:02.109 "is_configured": true, 
00:37:02.109 "data_offset": 0, 00:37:02.109 "data_size": 65536 00:37:02.109 }, 00:37:02.109 { 00:37:02.109 "name": "BaseBdev4", 00:37:02.109 "uuid": "76d838f5-d8d9-424c-bb4e-c475d521cdc1", 00:37:02.109 "is_configured": true, 00:37:02.109 "data_offset": 0, 00:37:02.109 "data_size": 65536 00:37:02.109 } 00:37:02.109 ] 00:37:02.109 } 00:37:02.109 } 00:37:02.109 }' 00:37:02.109 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:37:02.109 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:37:02.109 BaseBdev2 00:37:02.109 BaseBdev3 00:37:02.109 BaseBdev4' 00:37:02.109 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:02.109 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:37:02.109 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:02.109 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:37:02.109 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.109 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:02.109 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:02.109 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.109 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:02.109 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:02.109 17:34:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:02.109 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:37:02.109 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:02.109 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.109 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:02.109 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.109 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:02.109 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:02.109 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:02.109 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:37:02.109 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:02.109 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.109 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:02.368 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.368 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:02.368 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:02.368 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:02.368 17:34:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:37:02.368 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.368 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:02.368 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:02.368 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.368 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:02.368 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:02.368 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:37:02.368 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.368 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:02.368 [2024-11-26 17:34:02.889725] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:37:02.368 [2024-11-26 17:34:02.889756] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:02.368 [2024-11-26 17:34:02.889837] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:02.368 [2024-11-26 17:34:02.890139] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:02.368 [2024-11-26 17:34:02.890152] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:37:02.368 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.368 17:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73465 
00:37:02.368 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73465 ']' 00:37:02.368 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73465 00:37:02.368 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:37:02.368 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:02.368 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73465 00:37:02.368 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:02.368 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:02.368 killing process with pid 73465 00:37:02.368 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73465' 00:37:02.368 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73465 00:37:02.368 [2024-11-26 17:34:02.934673] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:02.368 17:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73465 00:37:02.937 [2024-11-26 17:34:03.337850] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:03.874 17:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:37:03.874 00:37:03.874 real 0m11.848s 00:37:03.874 user 0m18.777s 00:37:03.874 sys 0m2.117s 00:37:03.874 17:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:03.874 ************************************ 00:37:03.874 END TEST raid_state_function_test 00:37:03.874 ************************************ 00:37:03.874 17:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:03.874 17:34:04 bdev_raid -- 
bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:37:03.874 17:34:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:37:03.874 17:34:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:03.874 17:34:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:03.874 ************************************ 00:37:03.874 START TEST raid_state_function_test_sb 00:37:03.874 ************************************ 00:37:03.874 17:34:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:37:03.874 17:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:37:03.874 17:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:37:03.874 17:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:37:03.874 17:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:37:03.874 17:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:37:03.874 17:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:37:03.874 17:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:37:03.874 17:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:37:03.874 17:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:37:03.874 17:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:37:03.874 17:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:37:03.874 17:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:37:03.874 17:34:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:37:03.874 17:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:37:03.874 17:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:37:03.874 17:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:37:03.874 17:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:37:03.874 17:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:37:04.133 17:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:37:04.134 17:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:37:04.134 17:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:37:04.134 17:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:37:04.134 17:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:37:04.134 17:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:37:04.134 17:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:37:04.134 17:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:37:04.134 17:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:37:04.134 17:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:37:04.134 17:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74132 00:37:04.134 17:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:37:04.134 17:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74132' 00:37:04.134 Process raid pid: 74132 00:37:04.134 17:34:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74132 00:37:04.134 17:34:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 74132 ']' 00:37:04.134 17:34:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:04.134 17:34:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:04.134 17:34:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:04.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:04.134 17:34:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:04.134 17:34:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:04.134 [2024-11-26 17:34:04.657818] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:37:04.134 [2024-11-26 17:34:04.658038] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:04.392 [2024-11-26 17:34:04.834824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:04.392 [2024-11-26 17:34:04.951039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:04.652 [2024-11-26 17:34:05.154919] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:04.652 [2024-11-26 17:34:05.155053] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:04.911 17:34:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:04.911 17:34:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:37:04.911 17:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:37:04.911 17:34:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.911 17:34:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:04.911 [2024-11-26 17:34:05.522577] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:37:04.911 [2024-11-26 17:34:05.522631] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:37:04.911 [2024-11-26 17:34:05.522644] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:04.911 [2024-11-26 17:34:05.522654] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:04.911 [2024-11-26 17:34:05.522662] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:37:04.911 [2024-11-26 17:34:05.522671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:37:04.911 [2024-11-26 17:34:05.522679] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:37:04.911 [2024-11-26 17:34:05.522688] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:37:04.911 17:34:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.911 17:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:37:04.911 17:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:04.911 17:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:04.911 17:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:04.911 17:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:04.911 17:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:04.911 17:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:04.911 17:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:04.911 17:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:04.911 17:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:04.911 17:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:04.911 17:34:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.911 17:34:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:04.911 17:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:04.911 17:34:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.911 17:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:04.911 "name": "Existed_Raid", 00:37:04.911 "uuid": "d7fed57f-6261-4a9c-8af1-38188b45134b", 00:37:04.911 "strip_size_kb": 0, 00:37:04.911 "state": "configuring", 00:37:04.911 "raid_level": "raid1", 00:37:04.911 "superblock": true, 00:37:04.911 "num_base_bdevs": 4, 00:37:04.911 "num_base_bdevs_discovered": 0, 00:37:04.911 "num_base_bdevs_operational": 4, 00:37:04.911 "base_bdevs_list": [ 00:37:04.911 { 00:37:04.911 "name": "BaseBdev1", 00:37:04.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:04.911 "is_configured": false, 00:37:04.911 "data_offset": 0, 00:37:04.911 "data_size": 0 00:37:04.911 }, 00:37:04.911 { 00:37:04.911 "name": "BaseBdev2", 00:37:04.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:04.911 "is_configured": false, 00:37:04.911 "data_offset": 0, 00:37:04.911 "data_size": 0 00:37:04.911 }, 00:37:04.911 { 00:37:04.911 "name": "BaseBdev3", 00:37:04.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:04.911 "is_configured": false, 00:37:04.911 "data_offset": 0, 00:37:04.911 "data_size": 0 00:37:04.911 }, 00:37:04.911 { 00:37:04.911 "name": "BaseBdev4", 00:37:04.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:04.911 "is_configured": false, 00:37:04.911 "data_offset": 0, 00:37:04.911 "data_size": 0 00:37:04.911 } 00:37:04.911 ] 00:37:04.911 }' 00:37:04.911 17:34:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:04.911 17:34:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:05.477 17:34:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:37:05.477 17:34:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.477 17:34:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:05.477 [2024-11-26 17:34:06.001690] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:37:05.477 [2024-11-26 17:34:06.001734] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:37:05.477 17:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.477 17:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:37:05.477 17:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.477 17:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:05.477 [2024-11-26 17:34:06.013689] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:37:05.477 [2024-11-26 17:34:06.013753] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:37:05.477 [2024-11-26 17:34:06.013762] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:05.477 [2024-11-26 17:34:06.013771] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:05.477 [2024-11-26 17:34:06.013778] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:37:05.477 [2024-11-26 17:34:06.013787] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:37:05.477 [2024-11-26 17:34:06.013793] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev4 00:37:05.477 [2024-11-26 17:34:06.013802] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:37:05.477 17:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.478 17:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:37:05.478 17:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.478 17:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:05.478 [2024-11-26 17:34:06.065115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:05.478 BaseBdev1 00:37:05.478 17:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.478 17:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:37:05.478 17:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:37:05.478 17:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:05.478 17:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:37:05.478 17:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:05.478 17:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:05.478 17:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:37:05.478 17:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.478 17:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:05.478 17:34:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.478 17:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:37:05.478 17:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.478 17:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:05.478 [ 00:37:05.478 { 00:37:05.478 "name": "BaseBdev1", 00:37:05.478 "aliases": [ 00:37:05.478 "044bae8a-b269-4dfc-a9c6-c82eb423aa4b" 00:37:05.478 ], 00:37:05.478 "product_name": "Malloc disk", 00:37:05.478 "block_size": 512, 00:37:05.478 "num_blocks": 65536, 00:37:05.478 "uuid": "044bae8a-b269-4dfc-a9c6-c82eb423aa4b", 00:37:05.478 "assigned_rate_limits": { 00:37:05.478 "rw_ios_per_sec": 0, 00:37:05.478 "rw_mbytes_per_sec": 0, 00:37:05.478 "r_mbytes_per_sec": 0, 00:37:05.478 "w_mbytes_per_sec": 0 00:37:05.478 }, 00:37:05.478 "claimed": true, 00:37:05.478 "claim_type": "exclusive_write", 00:37:05.478 "zoned": false, 00:37:05.478 "supported_io_types": { 00:37:05.478 "read": true, 00:37:05.478 "write": true, 00:37:05.478 "unmap": true, 00:37:05.478 "flush": true, 00:37:05.478 "reset": true, 00:37:05.478 "nvme_admin": false, 00:37:05.478 "nvme_io": false, 00:37:05.478 "nvme_io_md": false, 00:37:05.478 "write_zeroes": true, 00:37:05.478 "zcopy": true, 00:37:05.478 "get_zone_info": false, 00:37:05.478 "zone_management": false, 00:37:05.478 "zone_append": false, 00:37:05.478 "compare": false, 00:37:05.478 "compare_and_write": false, 00:37:05.478 "abort": true, 00:37:05.478 "seek_hole": false, 00:37:05.478 "seek_data": false, 00:37:05.478 "copy": true, 00:37:05.478 "nvme_iov_md": false 00:37:05.478 }, 00:37:05.478 "memory_domains": [ 00:37:05.478 { 00:37:05.478 "dma_device_id": "system", 00:37:05.478 "dma_device_type": 1 00:37:05.478 }, 00:37:05.478 { 00:37:05.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:05.478 "dma_device_type": 2 00:37:05.478 } 00:37:05.478 
], 00:37:05.478 "driver_specific": {} 00:37:05.478 } 00:37:05.478 ] 00:37:05.478 17:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.478 17:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:37:05.478 17:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:37:05.478 17:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:05.478 17:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:05.478 17:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:05.478 17:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:05.478 17:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:05.478 17:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:05.478 17:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:05.478 17:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:05.478 17:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:05.478 17:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:05.478 17:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:05.478 17:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.478 17:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:05.478 17:34:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.478 17:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:05.478 "name": "Existed_Raid", 00:37:05.478 "uuid": "f1442ef2-3f03-4447-92eb-d3036093b9ff", 00:37:05.478 "strip_size_kb": 0, 00:37:05.478 "state": "configuring", 00:37:05.478 "raid_level": "raid1", 00:37:05.478 "superblock": true, 00:37:05.478 "num_base_bdevs": 4, 00:37:05.478 "num_base_bdevs_discovered": 1, 00:37:05.478 "num_base_bdevs_operational": 4, 00:37:05.478 "base_bdevs_list": [ 00:37:05.478 { 00:37:05.478 "name": "BaseBdev1", 00:37:05.478 "uuid": "044bae8a-b269-4dfc-a9c6-c82eb423aa4b", 00:37:05.478 "is_configured": true, 00:37:05.478 "data_offset": 2048, 00:37:05.478 "data_size": 63488 00:37:05.478 }, 00:37:05.478 { 00:37:05.478 "name": "BaseBdev2", 00:37:05.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:05.478 "is_configured": false, 00:37:05.478 "data_offset": 0, 00:37:05.478 "data_size": 0 00:37:05.478 }, 00:37:05.478 { 00:37:05.478 "name": "BaseBdev3", 00:37:05.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:05.478 "is_configured": false, 00:37:05.478 "data_offset": 0, 00:37:05.478 "data_size": 0 00:37:05.478 }, 00:37:05.478 { 00:37:05.478 "name": "BaseBdev4", 00:37:05.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:05.478 "is_configured": false, 00:37:05.478 "data_offset": 0, 00:37:05.478 "data_size": 0 00:37:05.478 } 00:37:05.478 ] 00:37:05.478 }' 00:37:05.478 17:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:05.478 17:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:06.043 17:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:37:06.043 17:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:06.043 17:34:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:06.043 [2024-11-26 17:34:06.568353] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:37:06.043 [2024-11-26 17:34:06.568419] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:37:06.043 17:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:06.043 17:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:37:06.043 17:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:06.044 17:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:06.044 [2024-11-26 17:34:06.580410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:06.044 [2024-11-26 17:34:06.582584] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:06.044 [2024-11-26 17:34:06.582631] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:06.044 [2024-11-26 17:34:06.582643] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:37:06.044 [2024-11-26 17:34:06.582656] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:37:06.044 [2024-11-26 17:34:06.582664] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:37:06.044 [2024-11-26 17:34:06.582675] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:37:06.044 17:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:06.044 17:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 
1 )) 00:37:06.044 17:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:37:06.044 17:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:37:06.044 17:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:06.044 17:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:06.044 17:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:06.044 17:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:06.044 17:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:06.044 17:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:06.044 17:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:06.044 17:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:06.044 17:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:06.044 17:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:06.044 17:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:06.044 17:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:06.044 17:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:06.044 17:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:06.044 17:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:37:06.044 "name": "Existed_Raid", 00:37:06.044 "uuid": "485a2112-1516-46c0-9c6f-a19eccae44b4", 00:37:06.044 "strip_size_kb": 0, 00:37:06.044 "state": "configuring", 00:37:06.044 "raid_level": "raid1", 00:37:06.044 "superblock": true, 00:37:06.044 "num_base_bdevs": 4, 00:37:06.044 "num_base_bdevs_discovered": 1, 00:37:06.044 "num_base_bdevs_operational": 4, 00:37:06.044 "base_bdevs_list": [ 00:37:06.044 { 00:37:06.044 "name": "BaseBdev1", 00:37:06.044 "uuid": "044bae8a-b269-4dfc-a9c6-c82eb423aa4b", 00:37:06.044 "is_configured": true, 00:37:06.044 "data_offset": 2048, 00:37:06.044 "data_size": 63488 00:37:06.044 }, 00:37:06.044 { 00:37:06.044 "name": "BaseBdev2", 00:37:06.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:06.044 "is_configured": false, 00:37:06.044 "data_offset": 0, 00:37:06.044 "data_size": 0 00:37:06.044 }, 00:37:06.044 { 00:37:06.044 "name": "BaseBdev3", 00:37:06.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:06.044 "is_configured": false, 00:37:06.044 "data_offset": 0, 00:37:06.044 "data_size": 0 00:37:06.044 }, 00:37:06.044 { 00:37:06.044 "name": "BaseBdev4", 00:37:06.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:06.044 "is_configured": false, 00:37:06.044 "data_offset": 0, 00:37:06.044 "data_size": 0 00:37:06.044 } 00:37:06.044 ] 00:37:06.044 }' 00:37:06.044 17:34:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:06.044 17:34:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:06.611 17:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:37:06.611 17:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:06.611 17:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:06.611 [2024-11-26 17:34:07.069432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:37:06.611 BaseBdev2 00:37:06.611 17:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:06.611 17:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:37:06.611 17:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:37:06.611 17:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:06.611 17:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:37:06.611 17:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:06.611 17:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:06.611 17:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:37:06.611 17:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:06.611 17:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:06.611 17:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:06.611 17:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:37:06.611 17:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:06.611 17:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:06.611 [ 00:37:06.611 { 00:37:06.611 "name": "BaseBdev2", 00:37:06.611 "aliases": [ 00:37:06.611 "14ec6568-f198-4ef7-b632-08283b2295da" 00:37:06.611 ], 00:37:06.611 "product_name": "Malloc disk", 00:37:06.611 "block_size": 512, 00:37:06.611 "num_blocks": 65536, 00:37:06.611 "uuid": "14ec6568-f198-4ef7-b632-08283b2295da", 00:37:06.611 
"assigned_rate_limits": { 00:37:06.611 "rw_ios_per_sec": 0, 00:37:06.611 "rw_mbytes_per_sec": 0, 00:37:06.611 "r_mbytes_per_sec": 0, 00:37:06.611 "w_mbytes_per_sec": 0 00:37:06.611 }, 00:37:06.611 "claimed": true, 00:37:06.611 "claim_type": "exclusive_write", 00:37:06.611 "zoned": false, 00:37:06.611 "supported_io_types": { 00:37:06.611 "read": true, 00:37:06.611 "write": true, 00:37:06.611 "unmap": true, 00:37:06.611 "flush": true, 00:37:06.611 "reset": true, 00:37:06.611 "nvme_admin": false, 00:37:06.611 "nvme_io": false, 00:37:06.611 "nvme_io_md": false, 00:37:06.611 "write_zeroes": true, 00:37:06.611 "zcopy": true, 00:37:06.611 "get_zone_info": false, 00:37:06.611 "zone_management": false, 00:37:06.611 "zone_append": false, 00:37:06.611 "compare": false, 00:37:06.611 "compare_and_write": false, 00:37:06.611 "abort": true, 00:37:06.611 "seek_hole": false, 00:37:06.611 "seek_data": false, 00:37:06.611 "copy": true, 00:37:06.611 "nvme_iov_md": false 00:37:06.611 }, 00:37:06.611 "memory_domains": [ 00:37:06.611 { 00:37:06.611 "dma_device_id": "system", 00:37:06.611 "dma_device_type": 1 00:37:06.611 }, 00:37:06.611 { 00:37:06.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:06.611 "dma_device_type": 2 00:37:06.611 } 00:37:06.611 ], 00:37:06.611 "driver_specific": {} 00:37:06.611 } 00:37:06.611 ] 00:37:06.611 17:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:06.611 17:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:37:06.611 17:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:37:06.611 17:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:37:06.611 17:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:37:06.611 17:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:37:06.611 17:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:06.611 17:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:06.611 17:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:06.611 17:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:06.611 17:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:06.611 17:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:06.611 17:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:06.611 17:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:06.611 17:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:06.611 17:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:06.611 17:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:06.611 17:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:06.612 17:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:06.612 17:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:06.612 "name": "Existed_Raid", 00:37:06.612 "uuid": "485a2112-1516-46c0-9c6f-a19eccae44b4", 00:37:06.612 "strip_size_kb": 0, 00:37:06.612 "state": "configuring", 00:37:06.612 "raid_level": "raid1", 00:37:06.612 "superblock": true, 00:37:06.612 "num_base_bdevs": 4, 00:37:06.612 "num_base_bdevs_discovered": 2, 00:37:06.612 "num_base_bdevs_operational": 4, 
00:37:06.612 "base_bdevs_list": [ 00:37:06.612 { 00:37:06.612 "name": "BaseBdev1", 00:37:06.612 "uuid": "044bae8a-b269-4dfc-a9c6-c82eb423aa4b", 00:37:06.612 "is_configured": true, 00:37:06.612 "data_offset": 2048, 00:37:06.612 "data_size": 63488 00:37:06.612 }, 00:37:06.612 { 00:37:06.612 "name": "BaseBdev2", 00:37:06.612 "uuid": "14ec6568-f198-4ef7-b632-08283b2295da", 00:37:06.612 "is_configured": true, 00:37:06.612 "data_offset": 2048, 00:37:06.612 "data_size": 63488 00:37:06.612 }, 00:37:06.612 { 00:37:06.612 "name": "BaseBdev3", 00:37:06.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:06.612 "is_configured": false, 00:37:06.612 "data_offset": 0, 00:37:06.612 "data_size": 0 00:37:06.612 }, 00:37:06.612 { 00:37:06.612 "name": "BaseBdev4", 00:37:06.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:06.612 "is_configured": false, 00:37:06.612 "data_offset": 0, 00:37:06.612 "data_size": 0 00:37:06.612 } 00:37:06.612 ] 00:37:06.612 }' 00:37:06.612 17:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:06.612 17:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:06.869 17:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:37:06.869 17:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:06.869 17:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:07.127 [2024-11-26 17:34:07.603262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:37:07.127 BaseBdev3 00:37:07.127 17:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.127 17:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:37:07.127 17:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:37:07.127 17:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:07.127 17:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:37:07.127 17:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:07.127 17:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:07.127 17:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:37:07.127 17:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.127 17:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:07.127 17:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.127 17:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:37:07.127 17:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.127 17:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:07.127 [ 00:37:07.127 { 00:37:07.127 "name": "BaseBdev3", 00:37:07.127 "aliases": [ 00:37:07.127 "cc213abf-535e-47c2-9806-f6557512e7a3" 00:37:07.127 ], 00:37:07.127 "product_name": "Malloc disk", 00:37:07.127 "block_size": 512, 00:37:07.127 "num_blocks": 65536, 00:37:07.127 "uuid": "cc213abf-535e-47c2-9806-f6557512e7a3", 00:37:07.127 "assigned_rate_limits": { 00:37:07.127 "rw_ios_per_sec": 0, 00:37:07.127 "rw_mbytes_per_sec": 0, 00:37:07.127 "r_mbytes_per_sec": 0, 00:37:07.127 "w_mbytes_per_sec": 0 00:37:07.127 }, 00:37:07.127 "claimed": true, 00:37:07.127 "claim_type": "exclusive_write", 00:37:07.127 "zoned": false, 00:37:07.127 "supported_io_types": { 00:37:07.127 "read": true, 00:37:07.127 
"write": true, 00:37:07.127 "unmap": true, 00:37:07.127 "flush": true, 00:37:07.127 "reset": true, 00:37:07.127 "nvme_admin": false, 00:37:07.127 "nvme_io": false, 00:37:07.127 "nvme_io_md": false, 00:37:07.127 "write_zeroes": true, 00:37:07.127 "zcopy": true, 00:37:07.127 "get_zone_info": false, 00:37:07.127 "zone_management": false, 00:37:07.127 "zone_append": false, 00:37:07.127 "compare": false, 00:37:07.127 "compare_and_write": false, 00:37:07.127 "abort": true, 00:37:07.127 "seek_hole": false, 00:37:07.127 "seek_data": false, 00:37:07.127 "copy": true, 00:37:07.127 "nvme_iov_md": false 00:37:07.127 }, 00:37:07.127 "memory_domains": [ 00:37:07.127 { 00:37:07.127 "dma_device_id": "system", 00:37:07.127 "dma_device_type": 1 00:37:07.127 }, 00:37:07.127 { 00:37:07.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:07.127 "dma_device_type": 2 00:37:07.127 } 00:37:07.127 ], 00:37:07.127 "driver_specific": {} 00:37:07.127 } 00:37:07.127 ] 00:37:07.127 17:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.127 17:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:37:07.127 17:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:37:07.127 17:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:37:07.127 17:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:37:07.127 17:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:07.127 17:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:07.127 17:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:07.127 17:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:37:07.127 17:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:07.127 17:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:07.127 17:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:07.127 17:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:07.127 17:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:07.127 17:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:07.127 17:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.127 17:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:07.127 17:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:07.127 17:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.128 17:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:07.128 "name": "Existed_Raid", 00:37:07.128 "uuid": "485a2112-1516-46c0-9c6f-a19eccae44b4", 00:37:07.128 "strip_size_kb": 0, 00:37:07.128 "state": "configuring", 00:37:07.128 "raid_level": "raid1", 00:37:07.128 "superblock": true, 00:37:07.128 "num_base_bdevs": 4, 00:37:07.128 "num_base_bdevs_discovered": 3, 00:37:07.128 "num_base_bdevs_operational": 4, 00:37:07.128 "base_bdevs_list": [ 00:37:07.128 { 00:37:07.128 "name": "BaseBdev1", 00:37:07.128 "uuid": "044bae8a-b269-4dfc-a9c6-c82eb423aa4b", 00:37:07.128 "is_configured": true, 00:37:07.128 "data_offset": 2048, 00:37:07.128 "data_size": 63488 00:37:07.128 }, 00:37:07.128 { 00:37:07.128 "name": "BaseBdev2", 00:37:07.128 "uuid": 
"14ec6568-f198-4ef7-b632-08283b2295da", 00:37:07.128 "is_configured": true, 00:37:07.128 "data_offset": 2048, 00:37:07.128 "data_size": 63488 00:37:07.128 }, 00:37:07.128 { 00:37:07.128 "name": "BaseBdev3", 00:37:07.128 "uuid": "cc213abf-535e-47c2-9806-f6557512e7a3", 00:37:07.128 "is_configured": true, 00:37:07.128 "data_offset": 2048, 00:37:07.128 "data_size": 63488 00:37:07.128 }, 00:37:07.128 { 00:37:07.128 "name": "BaseBdev4", 00:37:07.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:07.128 "is_configured": false, 00:37:07.128 "data_offset": 0, 00:37:07.128 "data_size": 0 00:37:07.128 } 00:37:07.128 ] 00:37:07.128 }' 00:37:07.128 17:34:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:07.128 17:34:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:07.696 17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:37:07.696 17:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.696 17:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:07.696 [2024-11-26 17:34:08.133036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:37:07.696 [2024-11-26 17:34:08.133483] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:37:07.696 [2024-11-26 17:34:08.133573] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:37:07.696 [2024-11-26 17:34:08.133916] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:37:07.696 BaseBdev4 00:37:07.696 [2024-11-26 17:34:08.134145] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:37:07.696 [2024-11-26 17:34:08.134162] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:37:07.696 [2024-11-26 17:34:08.134341] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:07.696 17:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.696 17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:37:07.696 17:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:37:07.696 17:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:07.696 17:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:37:07.696 17:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:07.696 17:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:07.696 17:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:37:07.696 17:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.696 17:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:07.696 17:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.696 17:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:37:07.696 17:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.696 17:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:07.696 [ 00:37:07.696 { 00:37:07.696 "name": "BaseBdev4", 00:37:07.696 "aliases": [ 00:37:07.696 "fcbb3e3b-8047-411a-b6df-959fd9d8a34e" 00:37:07.696 ], 00:37:07.696 "product_name": "Malloc disk", 00:37:07.696 "block_size": 512, 00:37:07.696 
"num_blocks": 65536, 00:37:07.696 "uuid": "fcbb3e3b-8047-411a-b6df-959fd9d8a34e", 00:37:07.696 "assigned_rate_limits": { 00:37:07.696 "rw_ios_per_sec": 0, 00:37:07.696 "rw_mbytes_per_sec": 0, 00:37:07.696 "r_mbytes_per_sec": 0, 00:37:07.696 "w_mbytes_per_sec": 0 00:37:07.696 }, 00:37:07.696 "claimed": true, 00:37:07.696 "claim_type": "exclusive_write", 00:37:07.696 "zoned": false, 00:37:07.696 "supported_io_types": { 00:37:07.696 "read": true, 00:37:07.696 "write": true, 00:37:07.696 "unmap": true, 00:37:07.696 "flush": true, 00:37:07.696 "reset": true, 00:37:07.696 "nvme_admin": false, 00:37:07.696 "nvme_io": false, 00:37:07.696 "nvme_io_md": false, 00:37:07.696 "write_zeroes": true, 00:37:07.696 "zcopy": true, 00:37:07.696 "get_zone_info": false, 00:37:07.696 "zone_management": false, 00:37:07.696 "zone_append": false, 00:37:07.696 "compare": false, 00:37:07.696 "compare_and_write": false, 00:37:07.696 "abort": true, 00:37:07.696 "seek_hole": false, 00:37:07.696 "seek_data": false, 00:37:07.696 "copy": true, 00:37:07.696 "nvme_iov_md": false 00:37:07.696 }, 00:37:07.696 "memory_domains": [ 00:37:07.696 { 00:37:07.696 "dma_device_id": "system", 00:37:07.696 "dma_device_type": 1 00:37:07.696 }, 00:37:07.696 { 00:37:07.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:07.696 "dma_device_type": 2 00:37:07.696 } 00:37:07.696 ], 00:37:07.696 "driver_specific": {} 00:37:07.696 } 00:37:07.696 ] 00:37:07.696 17:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.696 17:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:37:07.696 17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:37:07.696 17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:37:07.696 17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:37:07.696 17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:07.696 17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:07.696 17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:07.696 17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:07.696 17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:07.696 17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:07.696 17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:07.696 17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:07.696 17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:07.696 17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:07.696 17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:07.696 17:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.696 17:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:07.696 17:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.696 17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:07.696 "name": "Existed_Raid", 00:37:07.696 "uuid": "485a2112-1516-46c0-9c6f-a19eccae44b4", 00:37:07.696 "strip_size_kb": 0, 00:37:07.696 "state": "online", 00:37:07.696 "raid_level": "raid1", 00:37:07.696 "superblock": true, 00:37:07.696 "num_base_bdevs": 4, 
00:37:07.696 "num_base_bdevs_discovered": 4, 00:37:07.696 "num_base_bdevs_operational": 4, 00:37:07.696 "base_bdevs_list": [ 00:37:07.696 { 00:37:07.696 "name": "BaseBdev1", 00:37:07.696 "uuid": "044bae8a-b269-4dfc-a9c6-c82eb423aa4b", 00:37:07.696 "is_configured": true, 00:37:07.696 "data_offset": 2048, 00:37:07.696 "data_size": 63488 00:37:07.696 }, 00:37:07.696 { 00:37:07.696 "name": "BaseBdev2", 00:37:07.696 "uuid": "14ec6568-f198-4ef7-b632-08283b2295da", 00:37:07.696 "is_configured": true, 00:37:07.696 "data_offset": 2048, 00:37:07.696 "data_size": 63488 00:37:07.696 }, 00:37:07.696 { 00:37:07.696 "name": "BaseBdev3", 00:37:07.696 "uuid": "cc213abf-535e-47c2-9806-f6557512e7a3", 00:37:07.696 "is_configured": true, 00:37:07.696 "data_offset": 2048, 00:37:07.696 "data_size": 63488 00:37:07.696 }, 00:37:07.696 { 00:37:07.696 "name": "BaseBdev4", 00:37:07.696 "uuid": "fcbb3e3b-8047-411a-b6df-959fd9d8a34e", 00:37:07.696 "is_configured": true, 00:37:07.696 "data_offset": 2048, 00:37:07.696 "data_size": 63488 00:37:07.696 } 00:37:07.696 ] 00:37:07.696 }' 00:37:07.696 17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:07.696 17:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:07.954 17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:37:07.955 17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:37:07.955 17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:37:07.955 17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:37:07.955 17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:37:07.955 17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:37:07.955 
17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:37:07.955 17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:37:07.955 17:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.955 17:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:07.955 [2024-11-26 17:34:08.620705] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:07.955 17:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:08.214 17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:08.214 "name": "Existed_Raid", 00:37:08.214 "aliases": [ 00:37:08.214 "485a2112-1516-46c0-9c6f-a19eccae44b4" 00:37:08.214 ], 00:37:08.214 "product_name": "Raid Volume", 00:37:08.214 "block_size": 512, 00:37:08.214 "num_blocks": 63488, 00:37:08.214 "uuid": "485a2112-1516-46c0-9c6f-a19eccae44b4", 00:37:08.214 "assigned_rate_limits": { 00:37:08.214 "rw_ios_per_sec": 0, 00:37:08.214 "rw_mbytes_per_sec": 0, 00:37:08.214 "r_mbytes_per_sec": 0, 00:37:08.214 "w_mbytes_per_sec": 0 00:37:08.214 }, 00:37:08.214 "claimed": false, 00:37:08.214 "zoned": false, 00:37:08.214 "supported_io_types": { 00:37:08.214 "read": true, 00:37:08.214 "write": true, 00:37:08.214 "unmap": false, 00:37:08.214 "flush": false, 00:37:08.214 "reset": true, 00:37:08.214 "nvme_admin": false, 00:37:08.214 "nvme_io": false, 00:37:08.214 "nvme_io_md": false, 00:37:08.214 "write_zeroes": true, 00:37:08.214 "zcopy": false, 00:37:08.214 "get_zone_info": false, 00:37:08.214 "zone_management": false, 00:37:08.214 "zone_append": false, 00:37:08.214 "compare": false, 00:37:08.214 "compare_and_write": false, 00:37:08.214 "abort": false, 00:37:08.214 "seek_hole": false, 00:37:08.214 "seek_data": false, 00:37:08.214 "copy": false, 00:37:08.214 
"nvme_iov_md": false 00:37:08.214 }, 00:37:08.214 "memory_domains": [ 00:37:08.214 { 00:37:08.214 "dma_device_id": "system", 00:37:08.214 "dma_device_type": 1 00:37:08.214 }, 00:37:08.214 { 00:37:08.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:08.214 "dma_device_type": 2 00:37:08.214 }, 00:37:08.214 { 00:37:08.214 "dma_device_id": "system", 00:37:08.214 "dma_device_type": 1 00:37:08.214 }, 00:37:08.214 { 00:37:08.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:08.214 "dma_device_type": 2 00:37:08.214 }, 00:37:08.214 { 00:37:08.214 "dma_device_id": "system", 00:37:08.214 "dma_device_type": 1 00:37:08.214 }, 00:37:08.214 { 00:37:08.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:08.214 "dma_device_type": 2 00:37:08.214 }, 00:37:08.214 { 00:37:08.214 "dma_device_id": "system", 00:37:08.214 "dma_device_type": 1 00:37:08.214 }, 00:37:08.214 { 00:37:08.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:08.214 "dma_device_type": 2 00:37:08.214 } 00:37:08.214 ], 00:37:08.214 "driver_specific": { 00:37:08.214 "raid": { 00:37:08.214 "uuid": "485a2112-1516-46c0-9c6f-a19eccae44b4", 00:37:08.214 "strip_size_kb": 0, 00:37:08.214 "state": "online", 00:37:08.214 "raid_level": "raid1", 00:37:08.214 "superblock": true, 00:37:08.214 "num_base_bdevs": 4, 00:37:08.214 "num_base_bdevs_discovered": 4, 00:37:08.214 "num_base_bdevs_operational": 4, 00:37:08.214 "base_bdevs_list": [ 00:37:08.214 { 00:37:08.214 "name": "BaseBdev1", 00:37:08.214 "uuid": "044bae8a-b269-4dfc-a9c6-c82eb423aa4b", 00:37:08.214 "is_configured": true, 00:37:08.214 "data_offset": 2048, 00:37:08.214 "data_size": 63488 00:37:08.214 }, 00:37:08.214 { 00:37:08.214 "name": "BaseBdev2", 00:37:08.214 "uuid": "14ec6568-f198-4ef7-b632-08283b2295da", 00:37:08.214 "is_configured": true, 00:37:08.214 "data_offset": 2048, 00:37:08.214 "data_size": 63488 00:37:08.214 }, 00:37:08.214 { 00:37:08.214 "name": "BaseBdev3", 00:37:08.214 "uuid": "cc213abf-535e-47c2-9806-f6557512e7a3", 00:37:08.214 "is_configured": true, 
00:37:08.214 "data_offset": 2048, 00:37:08.214 "data_size": 63488 00:37:08.214 }, 00:37:08.214 { 00:37:08.214 "name": "BaseBdev4", 00:37:08.214 "uuid": "fcbb3e3b-8047-411a-b6df-959fd9d8a34e", 00:37:08.214 "is_configured": true, 00:37:08.214 "data_offset": 2048, 00:37:08.214 "data_size": 63488 00:37:08.214 } 00:37:08.214 ] 00:37:08.214 } 00:37:08.214 } 00:37:08.214 }' 00:37:08.214 17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:37:08.214 17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:37:08.214 BaseBdev2 00:37:08.214 BaseBdev3 00:37:08.214 BaseBdev4' 00:37:08.214 17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:08.214 17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:37:08.214 17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:08.214 17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:37:08.214 17:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:08.214 17:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:08.214 17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:08.214 17:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:08.214 17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:08.214 17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:08.214 17:34:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:08.214 17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:08.214 17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:37:08.214 17:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:08.214 17:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:08.214 17:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:08.214 17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:08.214 17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:08.214 17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:08.214 17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:08.214 17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:37:08.214 17:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:08.214 17:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:08.214 17:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:08.473 17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:08.473 17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:08.473 17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:37:08.473 17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:37:08.474 17:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:08.474 17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:08.474 17:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:08.474 17:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:08.474 17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:08.474 17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:08.474 17:34:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:37:08.474 17:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:08.474 17:34:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:08.474 [2024-11-26 17:34:08.971793] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:37:08.474 17:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:08.474 17:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:37:08.474 17:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:37:08.474 17:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:37:08.474 17:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:37:08.474 17:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:37:08.474 17:34:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:37:08.474 17:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:08.474 17:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:08.474 17:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:08.474 17:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:08.474 17:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:08.474 17:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:08.474 17:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:08.474 17:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:08.474 17:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:08.474 17:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:08.474 17:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:08.474 17:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:08.474 17:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:08.474 17:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:08.474 17:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:08.474 "name": "Existed_Raid", 00:37:08.474 "uuid": "485a2112-1516-46c0-9c6f-a19eccae44b4", 00:37:08.474 "strip_size_kb": 0, 00:37:08.474 
"state": "online", 00:37:08.474 "raid_level": "raid1", 00:37:08.474 "superblock": true, 00:37:08.474 "num_base_bdevs": 4, 00:37:08.474 "num_base_bdevs_discovered": 3, 00:37:08.474 "num_base_bdevs_operational": 3, 00:37:08.474 "base_bdevs_list": [ 00:37:08.474 { 00:37:08.474 "name": null, 00:37:08.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:08.474 "is_configured": false, 00:37:08.474 "data_offset": 0, 00:37:08.474 "data_size": 63488 00:37:08.474 }, 00:37:08.474 { 00:37:08.474 "name": "BaseBdev2", 00:37:08.474 "uuid": "14ec6568-f198-4ef7-b632-08283b2295da", 00:37:08.474 "is_configured": true, 00:37:08.474 "data_offset": 2048, 00:37:08.474 "data_size": 63488 00:37:08.474 }, 00:37:08.474 { 00:37:08.474 "name": "BaseBdev3", 00:37:08.474 "uuid": "cc213abf-535e-47c2-9806-f6557512e7a3", 00:37:08.474 "is_configured": true, 00:37:08.474 "data_offset": 2048, 00:37:08.474 "data_size": 63488 00:37:08.474 }, 00:37:08.474 { 00:37:08.474 "name": "BaseBdev4", 00:37:08.474 "uuid": "fcbb3e3b-8047-411a-b6df-959fd9d8a34e", 00:37:08.474 "is_configured": true, 00:37:08.474 "data_offset": 2048, 00:37:08.474 "data_size": 63488 00:37:08.474 } 00:37:08.474 ] 00:37:08.474 }' 00:37:08.474 17:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:08.474 17:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:09.050 17:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:37:09.050 17:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:37:09.050 17:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:37:09.050 17:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:09.050 17:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.050 17:34:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:09.050 17:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.050 17:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:37:09.050 17:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:37:09.050 17:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:37:09.050 17:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.050 17:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:09.050 [2024-11-26 17:34:09.587159] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:37:09.050 17:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.050 17:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:37:09.050 17:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:37:09.050 17:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:09.050 17:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:37:09.050 17:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.050 17:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:09.050 17:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.050 17:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:37:09.050 17:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:37:09.050 17:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:37:09.050 17:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.050 17:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:09.050 [2024-11-26 17:34:09.742032] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:37:09.309 17:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.309 17:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:37:09.309 17:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:37:09.309 17:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:37:09.309 17:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:09.309 17:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.309 17:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:09.309 17:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.309 17:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:37:09.309 17:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:37:09.309 17:34:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:37:09.309 17:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.309 17:34:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:09.309 [2024-11-26 17:34:09.891600] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:37:09.309 [2024-11-26 17:34:09.891710] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:09.309 [2024-11-26 17:34:10.001479] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:09.309 [2024-11-26 17:34:10.001626] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:09.309 [2024-11-26 17:34:10.001683] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:37:09.309 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:09.567 BaseBdev2 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:37:09.567 [ 00:37:09.567 { 00:37:09.567 "name": "BaseBdev2", 00:37:09.567 "aliases": [ 00:37:09.567 "f2914cf6-6dea-41c0-8710-c95d99a8c548" 00:37:09.567 ], 00:37:09.567 "product_name": "Malloc disk", 00:37:09.567 "block_size": 512, 00:37:09.567 "num_blocks": 65536, 00:37:09.567 "uuid": "f2914cf6-6dea-41c0-8710-c95d99a8c548", 00:37:09.567 "assigned_rate_limits": { 00:37:09.567 "rw_ios_per_sec": 0, 00:37:09.567 "rw_mbytes_per_sec": 0, 00:37:09.567 "r_mbytes_per_sec": 0, 00:37:09.567 "w_mbytes_per_sec": 0 00:37:09.567 }, 00:37:09.567 "claimed": false, 00:37:09.567 "zoned": false, 00:37:09.567 "supported_io_types": { 00:37:09.567 "read": true, 00:37:09.567 "write": true, 00:37:09.567 "unmap": true, 00:37:09.567 "flush": true, 00:37:09.567 "reset": true, 00:37:09.567 "nvme_admin": false, 00:37:09.567 "nvme_io": false, 00:37:09.567 "nvme_io_md": false, 00:37:09.567 "write_zeroes": true, 00:37:09.567 "zcopy": true, 00:37:09.567 "get_zone_info": false, 00:37:09.567 "zone_management": false, 00:37:09.567 "zone_append": false, 00:37:09.567 "compare": false, 00:37:09.567 "compare_and_write": false, 00:37:09.567 "abort": true, 00:37:09.567 "seek_hole": false, 00:37:09.567 "seek_data": false, 00:37:09.567 "copy": true, 00:37:09.567 "nvme_iov_md": false 00:37:09.567 }, 00:37:09.567 "memory_domains": [ 00:37:09.567 { 00:37:09.567 "dma_device_id": "system", 00:37:09.567 "dma_device_type": 1 00:37:09.567 }, 00:37:09.567 { 00:37:09.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:09.567 "dma_device_type": 2 00:37:09.567 } 00:37:09.567 ], 00:37:09.567 "driver_specific": {} 00:37:09.567 } 00:37:09.567 ] 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:37:09.567 17:34:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:09.567 BaseBdev3 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.567 17:34:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:09.567 [ 00:37:09.567 { 00:37:09.567 "name": "BaseBdev3", 00:37:09.567 "aliases": [ 00:37:09.567 "bee73aea-005b-42af-aaf7-f48b816d65a7" 00:37:09.567 ], 00:37:09.567 "product_name": "Malloc disk", 00:37:09.567 "block_size": 512, 00:37:09.567 "num_blocks": 65536, 00:37:09.567 "uuid": "bee73aea-005b-42af-aaf7-f48b816d65a7", 00:37:09.567 "assigned_rate_limits": { 00:37:09.567 "rw_ios_per_sec": 0, 00:37:09.567 "rw_mbytes_per_sec": 0, 00:37:09.567 "r_mbytes_per_sec": 0, 00:37:09.567 "w_mbytes_per_sec": 0 00:37:09.567 }, 00:37:09.567 "claimed": false, 00:37:09.567 "zoned": false, 00:37:09.567 "supported_io_types": { 00:37:09.567 "read": true, 00:37:09.567 "write": true, 00:37:09.567 "unmap": true, 00:37:09.567 "flush": true, 00:37:09.567 "reset": true, 00:37:09.567 "nvme_admin": false, 00:37:09.567 "nvme_io": false, 00:37:09.567 "nvme_io_md": false, 00:37:09.567 "write_zeroes": true, 00:37:09.567 "zcopy": true, 00:37:09.567 "get_zone_info": false, 00:37:09.567 "zone_management": false, 00:37:09.567 "zone_append": false, 00:37:09.567 "compare": false, 00:37:09.567 "compare_and_write": false, 00:37:09.567 "abort": true, 00:37:09.567 "seek_hole": false, 00:37:09.567 "seek_data": false, 00:37:09.567 "copy": true, 00:37:09.567 "nvme_iov_md": false 00:37:09.567 }, 00:37:09.567 "memory_domains": [ 00:37:09.567 { 00:37:09.567 "dma_device_id": "system", 00:37:09.567 "dma_device_type": 1 00:37:09.567 }, 00:37:09.567 { 00:37:09.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:09.567 "dma_device_type": 2 00:37:09.567 } 00:37:09.567 ], 00:37:09.567 "driver_specific": {} 00:37:09.567 } 00:37:09.567 ] 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:37:09.567 17:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:37:09.568 17:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:37:09.568 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.568 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:09.568 BaseBdev4 00:37:09.568 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.568 17:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:37:09.568 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:37:09.568 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:09.568 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:37:09.568 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:09.568 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:09.568 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:37:09.568 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.568 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:09.826 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.826 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:37:09.826 17:34:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.826 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:09.826 [ 00:37:09.826 { 00:37:09.826 "name": "BaseBdev4", 00:37:09.826 "aliases": [ 00:37:09.826 "3bdb0bb9-a2bd-426e-abd5-7790a6b95a87" 00:37:09.826 ], 00:37:09.826 "product_name": "Malloc disk", 00:37:09.826 "block_size": 512, 00:37:09.826 "num_blocks": 65536, 00:37:09.826 "uuid": "3bdb0bb9-a2bd-426e-abd5-7790a6b95a87", 00:37:09.826 "assigned_rate_limits": { 00:37:09.826 "rw_ios_per_sec": 0, 00:37:09.826 "rw_mbytes_per_sec": 0, 00:37:09.826 "r_mbytes_per_sec": 0, 00:37:09.826 "w_mbytes_per_sec": 0 00:37:09.826 }, 00:37:09.826 "claimed": false, 00:37:09.826 "zoned": false, 00:37:09.826 "supported_io_types": { 00:37:09.826 "read": true, 00:37:09.826 "write": true, 00:37:09.826 "unmap": true, 00:37:09.826 "flush": true, 00:37:09.826 "reset": true, 00:37:09.826 "nvme_admin": false, 00:37:09.826 "nvme_io": false, 00:37:09.826 "nvme_io_md": false, 00:37:09.826 "write_zeroes": true, 00:37:09.826 "zcopy": true, 00:37:09.826 "get_zone_info": false, 00:37:09.826 "zone_management": false, 00:37:09.826 "zone_append": false, 00:37:09.826 "compare": false, 00:37:09.826 "compare_and_write": false, 00:37:09.826 "abort": true, 00:37:09.826 "seek_hole": false, 00:37:09.826 "seek_data": false, 00:37:09.826 "copy": true, 00:37:09.826 "nvme_iov_md": false 00:37:09.826 }, 00:37:09.826 "memory_domains": [ 00:37:09.826 { 00:37:09.826 "dma_device_id": "system", 00:37:09.826 "dma_device_type": 1 00:37:09.826 }, 00:37:09.826 { 00:37:09.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:09.826 "dma_device_type": 2 00:37:09.826 } 00:37:09.826 ], 00:37:09.826 "driver_specific": {} 00:37:09.826 } 00:37:09.826 ] 00:37:09.826 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.826 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:37:09.826 17:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:37:09.826 17:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:37:09.826 17:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:37:09.826 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.826 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:09.826 [2024-11-26 17:34:10.306239] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:37:09.826 [2024-11-26 17:34:10.306293] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:37:09.826 [2024-11-26 17:34:10.306318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:09.826 [2024-11-26 17:34:10.308382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:37:09.826 [2024-11-26 17:34:10.308440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:37:09.826 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.826 17:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:37:09.826 17:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:09.826 17:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:09.826 17:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:09.826 17:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:37:09.826 17:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:09.826 17:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:09.826 17:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:09.826 17:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:09.826 17:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:09.826 17:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:09.826 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.826 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:09.826 17:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:09.826 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.826 17:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:09.826 "name": "Existed_Raid", 00:37:09.826 "uuid": "c99837bc-5265-4bc0-bd86-a0dcf304b864", 00:37:09.826 "strip_size_kb": 0, 00:37:09.826 "state": "configuring", 00:37:09.826 "raid_level": "raid1", 00:37:09.826 "superblock": true, 00:37:09.826 "num_base_bdevs": 4, 00:37:09.826 "num_base_bdevs_discovered": 3, 00:37:09.826 "num_base_bdevs_operational": 4, 00:37:09.826 "base_bdevs_list": [ 00:37:09.826 { 00:37:09.826 "name": "BaseBdev1", 00:37:09.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:09.826 "is_configured": false, 00:37:09.826 "data_offset": 0, 00:37:09.826 "data_size": 0 00:37:09.826 }, 00:37:09.826 { 00:37:09.826 "name": "BaseBdev2", 00:37:09.826 "uuid": "f2914cf6-6dea-41c0-8710-c95d99a8c548", 
00:37:09.826 "is_configured": true, 00:37:09.826 "data_offset": 2048, 00:37:09.826 "data_size": 63488 00:37:09.826 }, 00:37:09.826 { 00:37:09.826 "name": "BaseBdev3", 00:37:09.826 "uuid": "bee73aea-005b-42af-aaf7-f48b816d65a7", 00:37:09.826 "is_configured": true, 00:37:09.826 "data_offset": 2048, 00:37:09.826 "data_size": 63488 00:37:09.826 }, 00:37:09.826 { 00:37:09.826 "name": "BaseBdev4", 00:37:09.826 "uuid": "3bdb0bb9-a2bd-426e-abd5-7790a6b95a87", 00:37:09.826 "is_configured": true, 00:37:09.826 "data_offset": 2048, 00:37:09.826 "data_size": 63488 00:37:09.826 } 00:37:09.826 ] 00:37:09.826 }' 00:37:09.826 17:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:09.826 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:10.086 17:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:37:10.086 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.086 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:10.086 [2024-11-26 17:34:10.761460] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:37:10.086 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.086 17:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:37:10.086 17:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:10.086 17:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:10.086 17:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:10.086 17:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:37:10.086 17:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:10.086 17:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:10.086 17:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:10.086 17:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:10.086 17:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:10.086 17:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:10.086 17:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:10.086 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.086 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:10.345 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.345 17:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:10.345 "name": "Existed_Raid", 00:37:10.345 "uuid": "c99837bc-5265-4bc0-bd86-a0dcf304b864", 00:37:10.345 "strip_size_kb": 0, 00:37:10.345 "state": "configuring", 00:37:10.345 "raid_level": "raid1", 00:37:10.345 "superblock": true, 00:37:10.345 "num_base_bdevs": 4, 00:37:10.345 "num_base_bdevs_discovered": 2, 00:37:10.345 "num_base_bdevs_operational": 4, 00:37:10.345 "base_bdevs_list": [ 00:37:10.345 { 00:37:10.345 "name": "BaseBdev1", 00:37:10.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:10.345 "is_configured": false, 00:37:10.345 "data_offset": 0, 00:37:10.345 "data_size": 0 00:37:10.345 }, 00:37:10.345 { 00:37:10.345 "name": null, 00:37:10.345 "uuid": "f2914cf6-6dea-41c0-8710-c95d99a8c548", 00:37:10.345 
"is_configured": false, 00:37:10.345 "data_offset": 0, 00:37:10.345 "data_size": 63488 00:37:10.345 }, 00:37:10.345 { 00:37:10.345 "name": "BaseBdev3", 00:37:10.345 "uuid": "bee73aea-005b-42af-aaf7-f48b816d65a7", 00:37:10.345 "is_configured": true, 00:37:10.345 "data_offset": 2048, 00:37:10.345 "data_size": 63488 00:37:10.345 }, 00:37:10.345 { 00:37:10.345 "name": "BaseBdev4", 00:37:10.345 "uuid": "3bdb0bb9-a2bd-426e-abd5-7790a6b95a87", 00:37:10.345 "is_configured": true, 00:37:10.345 "data_offset": 2048, 00:37:10.345 "data_size": 63488 00:37:10.345 } 00:37:10.345 ] 00:37:10.345 }' 00:37:10.345 17:34:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:10.345 17:34:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:10.603 17:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:10.603 17:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.603 17:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:37:10.603 17:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:10.603 17:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.603 17:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:37:10.603 17:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:37:10.603 17:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.603 17:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:10.603 [2024-11-26 17:34:11.267381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:10.603 BaseBdev1 
00:37:10.603 17:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.603 17:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:37:10.603 17:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:37:10.603 17:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:10.603 17:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:37:10.603 17:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:10.603 17:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:10.603 17:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:37:10.603 17:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.603 17:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:10.603 17:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.603 17:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:37:10.603 17:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.603 17:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:10.603 [ 00:37:10.603 { 00:37:10.603 "name": "BaseBdev1", 00:37:10.603 "aliases": [ 00:37:10.603 "79462a1e-ed91-43b9-8d5c-6e8c497bd192" 00:37:10.603 ], 00:37:10.603 "product_name": "Malloc disk", 00:37:10.603 "block_size": 512, 00:37:10.603 "num_blocks": 65536, 00:37:10.603 "uuid": "79462a1e-ed91-43b9-8d5c-6e8c497bd192", 00:37:10.603 "assigned_rate_limits": { 00:37:10.603 
"rw_ios_per_sec": 0, 00:37:10.603 "rw_mbytes_per_sec": 0, 00:37:10.603 "r_mbytes_per_sec": 0, 00:37:10.603 "w_mbytes_per_sec": 0 00:37:10.603 }, 00:37:10.603 "claimed": true, 00:37:10.603 "claim_type": "exclusive_write", 00:37:10.603 "zoned": false, 00:37:10.603 "supported_io_types": { 00:37:10.603 "read": true, 00:37:10.603 "write": true, 00:37:10.603 "unmap": true, 00:37:10.603 "flush": true, 00:37:10.603 "reset": true, 00:37:10.603 "nvme_admin": false, 00:37:10.603 "nvme_io": false, 00:37:10.603 "nvme_io_md": false, 00:37:10.603 "write_zeroes": true, 00:37:10.603 "zcopy": true, 00:37:10.603 "get_zone_info": false, 00:37:10.603 "zone_management": false, 00:37:10.603 "zone_append": false, 00:37:10.603 "compare": false, 00:37:10.603 "compare_and_write": false, 00:37:10.603 "abort": true, 00:37:10.603 "seek_hole": false, 00:37:10.603 "seek_data": false, 00:37:10.603 "copy": true, 00:37:10.603 "nvme_iov_md": false 00:37:10.603 }, 00:37:10.603 "memory_domains": [ 00:37:10.603 { 00:37:10.603 "dma_device_id": "system", 00:37:10.603 "dma_device_type": 1 00:37:10.603 }, 00:37:10.603 { 00:37:10.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:10.603 "dma_device_type": 2 00:37:10.603 } 00:37:10.603 ], 00:37:10.603 "driver_specific": {} 00:37:10.603 } 00:37:10.603 ] 00:37:10.603 17:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.603 17:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:37:10.603 17:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:37:10.603 17:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:10.603 17:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:10.603 17:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:37:10.603 17:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:10.603 17:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:10.603 17:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:10.603 17:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:10.603 17:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:10.603 17:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:10.603 17:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:10.603 17:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:10.603 17:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.603 17:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:10.860 17:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.860 17:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:10.860 "name": "Existed_Raid", 00:37:10.860 "uuid": "c99837bc-5265-4bc0-bd86-a0dcf304b864", 00:37:10.860 "strip_size_kb": 0, 00:37:10.860 "state": "configuring", 00:37:10.860 "raid_level": "raid1", 00:37:10.860 "superblock": true, 00:37:10.860 "num_base_bdevs": 4, 00:37:10.860 "num_base_bdevs_discovered": 3, 00:37:10.860 "num_base_bdevs_operational": 4, 00:37:10.860 "base_bdevs_list": [ 00:37:10.860 { 00:37:10.860 "name": "BaseBdev1", 00:37:10.860 "uuid": "79462a1e-ed91-43b9-8d5c-6e8c497bd192", 00:37:10.860 "is_configured": true, 00:37:10.860 "data_offset": 2048, 00:37:10.860 "data_size": 63488 
00:37:10.860 }, 00:37:10.860 { 00:37:10.860 "name": null, 00:37:10.860 "uuid": "f2914cf6-6dea-41c0-8710-c95d99a8c548", 00:37:10.860 "is_configured": false, 00:37:10.860 "data_offset": 0, 00:37:10.860 "data_size": 63488 00:37:10.860 }, 00:37:10.860 { 00:37:10.860 "name": "BaseBdev3", 00:37:10.860 "uuid": "bee73aea-005b-42af-aaf7-f48b816d65a7", 00:37:10.860 "is_configured": true, 00:37:10.860 "data_offset": 2048, 00:37:10.860 "data_size": 63488 00:37:10.860 }, 00:37:10.860 { 00:37:10.860 "name": "BaseBdev4", 00:37:10.860 "uuid": "3bdb0bb9-a2bd-426e-abd5-7790a6b95a87", 00:37:10.860 "is_configured": true, 00:37:10.860 "data_offset": 2048, 00:37:10.860 "data_size": 63488 00:37:10.860 } 00:37:10.860 ] 00:37:10.860 }' 00:37:10.860 17:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:10.860 17:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:11.130 17:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:11.130 17:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.130 17:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:11.130 17:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:37:11.130 17:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.130 17:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:37:11.130 17:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:37:11.130 17:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.130 17:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:11.130 
[2024-11-26 17:34:11.754633] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:37:11.130 17:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.130 17:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:37:11.131 17:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:11.131 17:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:11.131 17:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:11.131 17:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:11.131 17:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:11.131 17:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:11.131 17:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:11.131 17:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:11.131 17:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:11.131 17:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:11.131 17:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:11.131 17:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.131 17:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:11.131 17:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.131 17:34:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:11.131 "name": "Existed_Raid", 00:37:11.131 "uuid": "c99837bc-5265-4bc0-bd86-a0dcf304b864", 00:37:11.131 "strip_size_kb": 0, 00:37:11.131 "state": "configuring", 00:37:11.131 "raid_level": "raid1", 00:37:11.131 "superblock": true, 00:37:11.131 "num_base_bdevs": 4, 00:37:11.131 "num_base_bdevs_discovered": 2, 00:37:11.131 "num_base_bdevs_operational": 4, 00:37:11.131 "base_bdevs_list": [ 00:37:11.131 { 00:37:11.131 "name": "BaseBdev1", 00:37:11.131 "uuid": "79462a1e-ed91-43b9-8d5c-6e8c497bd192", 00:37:11.131 "is_configured": true, 00:37:11.131 "data_offset": 2048, 00:37:11.131 "data_size": 63488 00:37:11.131 }, 00:37:11.131 { 00:37:11.131 "name": null, 00:37:11.131 "uuid": "f2914cf6-6dea-41c0-8710-c95d99a8c548", 00:37:11.131 "is_configured": false, 00:37:11.131 "data_offset": 0, 00:37:11.131 "data_size": 63488 00:37:11.131 }, 00:37:11.131 { 00:37:11.131 "name": null, 00:37:11.131 "uuid": "bee73aea-005b-42af-aaf7-f48b816d65a7", 00:37:11.131 "is_configured": false, 00:37:11.131 "data_offset": 0, 00:37:11.131 "data_size": 63488 00:37:11.131 }, 00:37:11.131 { 00:37:11.131 "name": "BaseBdev4", 00:37:11.131 "uuid": "3bdb0bb9-a2bd-426e-abd5-7790a6b95a87", 00:37:11.131 "is_configured": true, 00:37:11.131 "data_offset": 2048, 00:37:11.131 "data_size": 63488 00:37:11.131 } 00:37:11.131 ] 00:37:11.131 }' 00:37:11.131 17:34:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:11.131 17:34:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:11.698 17:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:37:11.698 17:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:11.698 17:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.698 
17:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:11.698 17:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.698 17:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:37:11.698 17:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:37:11.698 17:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.698 17:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:11.698 [2024-11-26 17:34:12.245784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:37:11.698 17:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.698 17:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:37:11.698 17:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:11.698 17:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:11.698 17:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:11.698 17:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:11.698 17:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:11.698 17:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:11.698 17:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:11.698 17:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:37:11.698 17:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:11.698 17:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:11.698 17:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.698 17:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:11.698 17:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:11.698 17:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.698 17:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:11.698 "name": "Existed_Raid", 00:37:11.698 "uuid": "c99837bc-5265-4bc0-bd86-a0dcf304b864", 00:37:11.698 "strip_size_kb": 0, 00:37:11.698 "state": "configuring", 00:37:11.698 "raid_level": "raid1", 00:37:11.698 "superblock": true, 00:37:11.698 "num_base_bdevs": 4, 00:37:11.698 "num_base_bdevs_discovered": 3, 00:37:11.698 "num_base_bdevs_operational": 4, 00:37:11.698 "base_bdevs_list": [ 00:37:11.698 { 00:37:11.698 "name": "BaseBdev1", 00:37:11.698 "uuid": "79462a1e-ed91-43b9-8d5c-6e8c497bd192", 00:37:11.698 "is_configured": true, 00:37:11.698 "data_offset": 2048, 00:37:11.698 "data_size": 63488 00:37:11.698 }, 00:37:11.698 { 00:37:11.698 "name": null, 00:37:11.698 "uuid": "f2914cf6-6dea-41c0-8710-c95d99a8c548", 00:37:11.698 "is_configured": false, 00:37:11.698 "data_offset": 0, 00:37:11.698 "data_size": 63488 00:37:11.698 }, 00:37:11.698 { 00:37:11.698 "name": "BaseBdev3", 00:37:11.698 "uuid": "bee73aea-005b-42af-aaf7-f48b816d65a7", 00:37:11.698 "is_configured": true, 00:37:11.698 "data_offset": 2048, 00:37:11.698 "data_size": 63488 00:37:11.698 }, 00:37:11.698 { 00:37:11.698 "name": "BaseBdev4", 00:37:11.698 "uuid": 
"3bdb0bb9-a2bd-426e-abd5-7790a6b95a87", 00:37:11.698 "is_configured": true, 00:37:11.698 "data_offset": 2048, 00:37:11.698 "data_size": 63488 00:37:11.698 } 00:37:11.698 ] 00:37:11.698 }' 00:37:11.698 17:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:11.698 17:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:12.266 17:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:12.266 17:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:12.266 17:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:12.266 17:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:37:12.266 17:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.266 17:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:37:12.266 17:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:37:12.266 17:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:12.266 17:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:12.266 [2024-11-26 17:34:12.729028] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:37:12.266 17:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.266 17:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:37:12.266 17:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:12.266 17:34:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:12.266 17:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:12.266 17:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:12.266 17:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:12.266 17:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:12.266 17:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:12.266 17:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:12.266 17:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:12.266 17:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:12.266 17:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:12.266 17:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:12.266 17:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:12.266 17:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.266 17:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:12.266 "name": "Existed_Raid", 00:37:12.266 "uuid": "c99837bc-5265-4bc0-bd86-a0dcf304b864", 00:37:12.266 "strip_size_kb": 0, 00:37:12.266 "state": "configuring", 00:37:12.266 "raid_level": "raid1", 00:37:12.266 "superblock": true, 00:37:12.266 "num_base_bdevs": 4, 00:37:12.266 "num_base_bdevs_discovered": 2, 00:37:12.266 "num_base_bdevs_operational": 4, 00:37:12.266 "base_bdevs_list": [ 00:37:12.266 { 00:37:12.266 "name": null, 00:37:12.266 
"uuid": "79462a1e-ed91-43b9-8d5c-6e8c497bd192", 00:37:12.266 "is_configured": false, 00:37:12.266 "data_offset": 0, 00:37:12.266 "data_size": 63488 00:37:12.266 }, 00:37:12.266 { 00:37:12.266 "name": null, 00:37:12.266 "uuid": "f2914cf6-6dea-41c0-8710-c95d99a8c548", 00:37:12.266 "is_configured": false, 00:37:12.266 "data_offset": 0, 00:37:12.266 "data_size": 63488 00:37:12.266 }, 00:37:12.266 { 00:37:12.266 "name": "BaseBdev3", 00:37:12.266 "uuid": "bee73aea-005b-42af-aaf7-f48b816d65a7", 00:37:12.266 "is_configured": true, 00:37:12.266 "data_offset": 2048, 00:37:12.266 "data_size": 63488 00:37:12.266 }, 00:37:12.266 { 00:37:12.266 "name": "BaseBdev4", 00:37:12.266 "uuid": "3bdb0bb9-a2bd-426e-abd5-7790a6b95a87", 00:37:12.266 "is_configured": true, 00:37:12.266 "data_offset": 2048, 00:37:12.266 "data_size": 63488 00:37:12.266 } 00:37:12.266 ] 00:37:12.266 }' 00:37:12.266 17:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:12.266 17:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:12.833 17:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:12.833 17:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:37:12.833 17:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:12.833 17:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:12.833 17:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.833 17:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:37:12.833 17:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:37:12.833 17:34:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:37:12.833 17:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:12.833 [2024-11-26 17:34:13.343629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:12.833 17:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.833 17:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:37:12.834 17:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:12.834 17:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:12.834 17:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:12.834 17:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:12.834 17:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:12.834 17:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:12.834 17:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:12.834 17:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:12.834 17:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:12.834 17:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:12.834 17:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:12.834 17:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:12.834 17:34:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:12.834 17:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.834 17:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:12.834 "name": "Existed_Raid", 00:37:12.834 "uuid": "c99837bc-5265-4bc0-bd86-a0dcf304b864", 00:37:12.834 "strip_size_kb": 0, 00:37:12.834 "state": "configuring", 00:37:12.834 "raid_level": "raid1", 00:37:12.834 "superblock": true, 00:37:12.834 "num_base_bdevs": 4, 00:37:12.834 "num_base_bdevs_discovered": 3, 00:37:12.834 "num_base_bdevs_operational": 4, 00:37:12.834 "base_bdevs_list": [ 00:37:12.834 { 00:37:12.834 "name": null, 00:37:12.834 "uuid": "79462a1e-ed91-43b9-8d5c-6e8c497bd192", 00:37:12.834 "is_configured": false, 00:37:12.834 "data_offset": 0, 00:37:12.834 "data_size": 63488 00:37:12.834 }, 00:37:12.834 { 00:37:12.834 "name": "BaseBdev2", 00:37:12.834 "uuid": "f2914cf6-6dea-41c0-8710-c95d99a8c548", 00:37:12.834 "is_configured": true, 00:37:12.834 "data_offset": 2048, 00:37:12.834 "data_size": 63488 00:37:12.834 }, 00:37:12.834 { 00:37:12.834 "name": "BaseBdev3", 00:37:12.834 "uuid": "bee73aea-005b-42af-aaf7-f48b816d65a7", 00:37:12.834 "is_configured": true, 00:37:12.834 "data_offset": 2048, 00:37:12.834 "data_size": 63488 00:37:12.834 }, 00:37:12.834 { 00:37:12.834 "name": "BaseBdev4", 00:37:12.834 "uuid": "3bdb0bb9-a2bd-426e-abd5-7790a6b95a87", 00:37:12.834 "is_configured": true, 00:37:12.834 "data_offset": 2048, 00:37:12.834 "data_size": 63488 00:37:12.834 } 00:37:12.834 ] 00:37:12.834 }' 00:37:12.834 17:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:12.834 17:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:13.092 17:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:13.092 17:34:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:13.092 17:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:13.092 17:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:37:13.352 17:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:13.352 17:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:37:13.352 17:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:37:13.352 17:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:13.352 17:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:13.352 17:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:13.352 17:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:13.352 17:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 79462a1e-ed91-43b9-8d5c-6e8c497bd192 00:37:13.352 17:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:13.352 17:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:13.352 [2024-11-26 17:34:13.898014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:37:13.352 [2024-11-26 17:34:13.898276] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:37:13.352 [2024-11-26 17:34:13.898292] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:37:13.352 [2024-11-26 17:34:13.898586] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:37:13.352 [2024-11-26 17:34:13.898787] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:37:13.352 [2024-11-26 17:34:13.898802] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:37:13.352 [2024-11-26 17:34:13.898972] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:13.352 NewBaseBdev 00:37:13.353 17:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:13.353 17:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:37:13.353 17:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:37:13.353 17:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:13.353 17:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:37:13.353 17:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:13.353 17:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:13.353 17:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:37:13.353 17:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:13.353 17:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:13.353 17:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:13.353 17:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:37:13.353 17:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:13.353 17:34:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:13.353 [ 00:37:13.353 { 00:37:13.353 "name": "NewBaseBdev", 00:37:13.353 "aliases": [ 00:37:13.353 "79462a1e-ed91-43b9-8d5c-6e8c497bd192" 00:37:13.353 ], 00:37:13.353 "product_name": "Malloc disk", 00:37:13.353 "block_size": 512, 00:37:13.353 "num_blocks": 65536, 00:37:13.353 "uuid": "79462a1e-ed91-43b9-8d5c-6e8c497bd192", 00:37:13.353 "assigned_rate_limits": { 00:37:13.353 "rw_ios_per_sec": 0, 00:37:13.353 "rw_mbytes_per_sec": 0, 00:37:13.353 "r_mbytes_per_sec": 0, 00:37:13.353 "w_mbytes_per_sec": 0 00:37:13.353 }, 00:37:13.353 "claimed": true, 00:37:13.353 "claim_type": "exclusive_write", 00:37:13.353 "zoned": false, 00:37:13.353 "supported_io_types": { 00:37:13.353 "read": true, 00:37:13.353 "write": true, 00:37:13.353 "unmap": true, 00:37:13.353 "flush": true, 00:37:13.353 "reset": true, 00:37:13.353 "nvme_admin": false, 00:37:13.353 "nvme_io": false, 00:37:13.353 "nvme_io_md": false, 00:37:13.353 "write_zeroes": true, 00:37:13.353 "zcopy": true, 00:37:13.353 "get_zone_info": false, 00:37:13.353 "zone_management": false, 00:37:13.353 "zone_append": false, 00:37:13.353 "compare": false, 00:37:13.353 "compare_and_write": false, 00:37:13.353 "abort": true, 00:37:13.353 "seek_hole": false, 00:37:13.353 "seek_data": false, 00:37:13.353 "copy": true, 00:37:13.353 "nvme_iov_md": false 00:37:13.353 }, 00:37:13.353 "memory_domains": [ 00:37:13.353 { 00:37:13.353 "dma_device_id": "system", 00:37:13.353 "dma_device_type": 1 00:37:13.353 }, 00:37:13.353 { 00:37:13.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:13.353 "dma_device_type": 2 00:37:13.353 } 00:37:13.353 ], 00:37:13.353 "driver_specific": {} 00:37:13.353 } 00:37:13.353 ] 00:37:13.353 17:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:13.353 17:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:37:13.353 17:34:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:37:13.353 17:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:13.353 17:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:13.353 17:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:13.353 17:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:13.353 17:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:13.353 17:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:13.353 17:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:13.353 17:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:13.353 17:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:13.353 17:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:13.353 17:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:13.353 17:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:13.353 17:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:13.353 17:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:13.353 17:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:13.353 "name": "Existed_Raid", 00:37:13.353 "uuid": "c99837bc-5265-4bc0-bd86-a0dcf304b864", 00:37:13.353 "strip_size_kb": 0, 00:37:13.353 
"state": "online", 00:37:13.353 "raid_level": "raid1", 00:37:13.353 "superblock": true, 00:37:13.353 "num_base_bdevs": 4, 00:37:13.353 "num_base_bdevs_discovered": 4, 00:37:13.353 "num_base_bdevs_operational": 4, 00:37:13.353 "base_bdevs_list": [ 00:37:13.353 { 00:37:13.353 "name": "NewBaseBdev", 00:37:13.353 "uuid": "79462a1e-ed91-43b9-8d5c-6e8c497bd192", 00:37:13.353 "is_configured": true, 00:37:13.353 "data_offset": 2048, 00:37:13.353 "data_size": 63488 00:37:13.353 }, 00:37:13.353 { 00:37:13.353 "name": "BaseBdev2", 00:37:13.353 "uuid": "f2914cf6-6dea-41c0-8710-c95d99a8c548", 00:37:13.353 "is_configured": true, 00:37:13.353 "data_offset": 2048, 00:37:13.353 "data_size": 63488 00:37:13.353 }, 00:37:13.353 { 00:37:13.353 "name": "BaseBdev3", 00:37:13.353 "uuid": "bee73aea-005b-42af-aaf7-f48b816d65a7", 00:37:13.353 "is_configured": true, 00:37:13.353 "data_offset": 2048, 00:37:13.353 "data_size": 63488 00:37:13.353 }, 00:37:13.353 { 00:37:13.353 "name": "BaseBdev4", 00:37:13.353 "uuid": "3bdb0bb9-a2bd-426e-abd5-7790a6b95a87", 00:37:13.353 "is_configured": true, 00:37:13.353 "data_offset": 2048, 00:37:13.353 "data_size": 63488 00:37:13.353 } 00:37:13.353 ] 00:37:13.353 }' 00:37:13.353 17:34:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:13.353 17:34:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:13.922 17:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:37:13.922 17:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:37:13.922 17:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:37:13.922 17:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:37:13.922 17:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:37:13.922 
17:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:37:13.923 17:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:37:13.923 17:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:13.923 17:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:13.923 17:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:37:13.923 [2024-11-26 17:34:14.369666] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:13.923 17:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:13.923 17:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:13.923 "name": "Existed_Raid", 00:37:13.923 "aliases": [ 00:37:13.923 "c99837bc-5265-4bc0-bd86-a0dcf304b864" 00:37:13.923 ], 00:37:13.923 "product_name": "Raid Volume", 00:37:13.923 "block_size": 512, 00:37:13.923 "num_blocks": 63488, 00:37:13.923 "uuid": "c99837bc-5265-4bc0-bd86-a0dcf304b864", 00:37:13.923 "assigned_rate_limits": { 00:37:13.923 "rw_ios_per_sec": 0, 00:37:13.923 "rw_mbytes_per_sec": 0, 00:37:13.923 "r_mbytes_per_sec": 0, 00:37:13.923 "w_mbytes_per_sec": 0 00:37:13.923 }, 00:37:13.923 "claimed": false, 00:37:13.923 "zoned": false, 00:37:13.923 "supported_io_types": { 00:37:13.923 "read": true, 00:37:13.923 "write": true, 00:37:13.923 "unmap": false, 00:37:13.923 "flush": false, 00:37:13.923 "reset": true, 00:37:13.923 "nvme_admin": false, 00:37:13.923 "nvme_io": false, 00:37:13.923 "nvme_io_md": false, 00:37:13.923 "write_zeroes": true, 00:37:13.923 "zcopy": false, 00:37:13.923 "get_zone_info": false, 00:37:13.923 "zone_management": false, 00:37:13.923 "zone_append": false, 00:37:13.923 "compare": false, 00:37:13.923 "compare_and_write": false, 00:37:13.923 
"abort": false, 00:37:13.923 "seek_hole": false, 00:37:13.923 "seek_data": false, 00:37:13.923 "copy": false, 00:37:13.923 "nvme_iov_md": false 00:37:13.923 }, 00:37:13.923 "memory_domains": [ 00:37:13.923 { 00:37:13.923 "dma_device_id": "system", 00:37:13.923 "dma_device_type": 1 00:37:13.923 }, 00:37:13.923 { 00:37:13.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:13.923 "dma_device_type": 2 00:37:13.923 }, 00:37:13.923 { 00:37:13.923 "dma_device_id": "system", 00:37:13.923 "dma_device_type": 1 00:37:13.923 }, 00:37:13.923 { 00:37:13.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:13.923 "dma_device_type": 2 00:37:13.923 }, 00:37:13.923 { 00:37:13.923 "dma_device_id": "system", 00:37:13.923 "dma_device_type": 1 00:37:13.923 }, 00:37:13.923 { 00:37:13.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:13.923 "dma_device_type": 2 00:37:13.923 }, 00:37:13.923 { 00:37:13.923 "dma_device_id": "system", 00:37:13.923 "dma_device_type": 1 00:37:13.923 }, 00:37:13.923 { 00:37:13.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:13.923 "dma_device_type": 2 00:37:13.923 } 00:37:13.923 ], 00:37:13.923 "driver_specific": { 00:37:13.923 "raid": { 00:37:13.923 "uuid": "c99837bc-5265-4bc0-bd86-a0dcf304b864", 00:37:13.923 "strip_size_kb": 0, 00:37:13.923 "state": "online", 00:37:13.923 "raid_level": "raid1", 00:37:13.923 "superblock": true, 00:37:13.923 "num_base_bdevs": 4, 00:37:13.923 "num_base_bdevs_discovered": 4, 00:37:13.923 "num_base_bdevs_operational": 4, 00:37:13.923 "base_bdevs_list": [ 00:37:13.923 { 00:37:13.923 "name": "NewBaseBdev", 00:37:13.923 "uuid": "79462a1e-ed91-43b9-8d5c-6e8c497bd192", 00:37:13.923 "is_configured": true, 00:37:13.923 "data_offset": 2048, 00:37:13.923 "data_size": 63488 00:37:13.923 }, 00:37:13.923 { 00:37:13.923 "name": "BaseBdev2", 00:37:13.923 "uuid": "f2914cf6-6dea-41c0-8710-c95d99a8c548", 00:37:13.923 "is_configured": true, 00:37:13.923 "data_offset": 2048, 00:37:13.923 "data_size": 63488 00:37:13.923 }, 00:37:13.923 { 
00:37:13.923 "name": "BaseBdev3", 00:37:13.923 "uuid": "bee73aea-005b-42af-aaf7-f48b816d65a7", 00:37:13.923 "is_configured": true, 00:37:13.923 "data_offset": 2048, 00:37:13.923 "data_size": 63488 00:37:13.923 }, 00:37:13.923 { 00:37:13.923 "name": "BaseBdev4", 00:37:13.923 "uuid": "3bdb0bb9-a2bd-426e-abd5-7790a6b95a87", 00:37:13.923 "is_configured": true, 00:37:13.923 "data_offset": 2048, 00:37:13.923 "data_size": 63488 00:37:13.923 } 00:37:13.923 ] 00:37:13.923 } 00:37:13.923 } 00:37:13.923 }' 00:37:13.923 17:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:37:13.923 17:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:37:13.923 BaseBdev2 00:37:13.923 BaseBdev3 00:37:13.923 BaseBdev4' 00:37:13.923 17:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:13.923 17:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:37:13.923 17:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:13.923 17:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:37:13.923 17:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:13.923 17:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:13.923 17:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:13.923 17:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:13.923 17:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:37:13.923 17:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:13.923 17:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:13.923 17:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:37:13.923 17:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:13.923 17:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:13.923 17:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:13.923 17:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:14.182 17:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:14.182 17:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:14.182 17:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:14.182 17:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:37:14.182 17:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:14.183 17:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:14.183 17:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:14.183 17:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:14.183 17:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:14.183 17:34:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:14.183 17:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:14.183 17:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:37:14.183 17:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:14.183 17:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:14.183 17:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:14.183 17:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:14.183 17:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:14.183 17:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:14.183 17:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:37:14.183 17:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:14.183 17:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:14.183 [2024-11-26 17:34:14.724685] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:37:14.183 [2024-11-26 17:34:14.724719] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:14.183 [2024-11-26 17:34:14.724811] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:14.183 [2024-11-26 17:34:14.725154] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:14.183 [2024-11-26 17:34:14.725174] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:37:14.183 17:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:14.183 17:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74132 00:37:14.183 17:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 74132 ']' 00:37:14.183 17:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 74132 00:37:14.183 17:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:37:14.183 17:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:14.183 17:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74132 00:37:14.183 killing process with pid 74132 00:37:14.183 17:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:14.183 17:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:14.183 17:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74132' 00:37:14.183 17:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 74132 00:37:14.183 [2024-11-26 17:34:14.773204] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:14.183 17:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 74132 00:37:14.749 [2024-11-26 17:34:15.208835] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:16.128 17:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:37:16.128 00:37:16.128 real 0m11.939s 00:37:16.128 user 0m18.864s 00:37:16.128 sys 0m1.982s 00:37:16.128 17:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:37:16.128 17:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:16.128 ************************************ 00:37:16.128 END TEST raid_state_function_test_sb 00:37:16.128 ************************************ 00:37:16.128 17:34:16 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:37:16.128 17:34:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:16.128 17:34:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:16.128 17:34:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:16.128 ************************************ 00:37:16.128 START TEST raid_superblock_test 00:37:16.128 ************************************ 00:37:16.128 17:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:37:16.128 17:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:37:16.128 17:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:37:16.128 17:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:37:16.128 17:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:37:16.128 17:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:37:16.128 17:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:37:16.128 17:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:37:16.128 17:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:37:16.128 17:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:37:16.128 17:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:37:16.128 17:34:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:37:16.128 17:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:37:16.128 17:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:37:16.128 17:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:37:16.128 17:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:37:16.128 17:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74808 00:37:16.128 17:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:37:16.128 17:34:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74808 00:37:16.128 17:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74808 ']' 00:37:16.128 17:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:16.128 17:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:16.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:16.128 17:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:16.128 17:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:16.128 17:34:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:16.128 [2024-11-26 17:34:16.670515] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:37:16.128 [2024-11-26 17:34:16.670672] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74808 ] 00:37:16.387 [2024-11-26 17:34:16.851141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:16.387 [2024-11-26 17:34:16.985582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:16.645 [2024-11-26 17:34:17.218568] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:16.645 [2024-11-26 17:34:17.218607] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:16.904 17:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:16.904 17:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:37:16.904 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:37:16.904 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:37:16.904 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:37:16.904 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:37:16.904 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:37:16.904 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:37:16.904 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:37:16.904 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:37:16.904 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:37:16.904 
17:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:16.904 17:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:17.165 malloc1 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:17.165 [2024-11-26 17:34:17.646750] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:37:17.165 [2024-11-26 17:34:17.646811] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:17.165 [2024-11-26 17:34:17.646832] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:37:17.165 [2024-11-26 17:34:17.646842] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:17.165 [2024-11-26 17:34:17.649107] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:17.165 [2024-11-26 17:34:17.649148] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:37:17.165 pt1 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:17.165 malloc2 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:17.165 [2024-11-26 17:34:17.698787] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:37:17.165 [2024-11-26 17:34:17.698852] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:17.165 [2024-11-26 17:34:17.698882] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:37:17.165 [2024-11-26 17:34:17.698892] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:17.165 [2024-11-26 17:34:17.701259] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:17.165 [2024-11-26 17:34:17.701305] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:37:17.165 
pt2 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:17.165 malloc3 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:17.165 [2024-11-26 17:34:17.774764] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:37:17.165 [2024-11-26 17:34:17.774845] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:17.165 [2024-11-26 17:34:17.774871] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:37:17.165 [2024-11-26 17:34:17.774882] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:17.165 [2024-11-26 17:34:17.777307] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:17.165 [2024-11-26 17:34:17.777351] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:37:17.165 pt3 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:17.165 malloc4 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:17.165 [2024-11-26 17:34:17.835084] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:37:17.165 [2024-11-26 17:34:17.835178] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:17.165 [2024-11-26 17:34:17.835205] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:37:17.165 [2024-11-26 17:34:17.835216] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:17.165 [2024-11-26 17:34:17.837652] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:17.165 [2024-11-26 17:34:17.837693] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:37:17.165 pt4 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:17.165 [2024-11-26 17:34:17.847117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:37:17.165 [2024-11-26 17:34:17.849211] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:17.165 [2024-11-26 17:34:17.849294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:37:17.165 [2024-11-26 17:34:17.849369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:37:17.165 [2024-11-26 17:34:17.849607] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:37:17.165 [2024-11-26 17:34:17.849635] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:37:17.165 [2024-11-26 17:34:17.849950] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:37:17.165 [2024-11-26 17:34:17.850174] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:37:17.165 [2024-11-26 17:34:17.850199] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:37:17.165 [2024-11-26 17:34:17.850396] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:17.165 
17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:17.165 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:17.426 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:17.426 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:17.426 17:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.426 17:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:17.426 17:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.426 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:17.426 "name": "raid_bdev1", 00:37:17.426 "uuid": "1765eed2-5831-4113-975a-d91b7d899215", 00:37:17.426 "strip_size_kb": 0, 00:37:17.426 "state": "online", 00:37:17.426 "raid_level": "raid1", 00:37:17.426 "superblock": true, 00:37:17.426 "num_base_bdevs": 4, 00:37:17.426 "num_base_bdevs_discovered": 4, 00:37:17.426 "num_base_bdevs_operational": 4, 00:37:17.426 "base_bdevs_list": [ 00:37:17.426 { 00:37:17.426 "name": "pt1", 00:37:17.426 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:17.426 "is_configured": true, 00:37:17.426 "data_offset": 2048, 00:37:17.426 "data_size": 63488 00:37:17.426 }, 00:37:17.426 { 00:37:17.426 "name": "pt2", 00:37:17.426 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:17.426 "is_configured": true, 00:37:17.426 "data_offset": 2048, 00:37:17.426 "data_size": 63488 00:37:17.426 }, 00:37:17.426 { 00:37:17.426 "name": "pt3", 00:37:17.426 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:17.426 "is_configured": true, 00:37:17.426 "data_offset": 2048, 00:37:17.426 "data_size": 63488 
00:37:17.426 }, 00:37:17.426 { 00:37:17.426 "name": "pt4", 00:37:17.426 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:17.426 "is_configured": true, 00:37:17.426 "data_offset": 2048, 00:37:17.426 "data_size": 63488 00:37:17.426 } 00:37:17.426 ] 00:37:17.426 }' 00:37:17.426 17:34:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:17.426 17:34:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:17.685 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:37:17.685 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:37:17.685 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:37:17.685 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:37:17.685 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:37:17.685 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:37:17.685 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:37:17.685 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.685 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:17.685 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:37:17.685 [2024-11-26 17:34:18.306688] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:17.685 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.685 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:17.685 "name": "raid_bdev1", 00:37:17.685 "aliases": [ 00:37:17.685 "1765eed2-5831-4113-975a-d91b7d899215" 00:37:17.685 ], 
00:37:17.685 "product_name": "Raid Volume", 00:37:17.685 "block_size": 512, 00:37:17.685 "num_blocks": 63488, 00:37:17.685 "uuid": "1765eed2-5831-4113-975a-d91b7d899215", 00:37:17.685 "assigned_rate_limits": { 00:37:17.685 "rw_ios_per_sec": 0, 00:37:17.685 "rw_mbytes_per_sec": 0, 00:37:17.685 "r_mbytes_per_sec": 0, 00:37:17.685 "w_mbytes_per_sec": 0 00:37:17.685 }, 00:37:17.685 "claimed": false, 00:37:17.685 "zoned": false, 00:37:17.685 "supported_io_types": { 00:37:17.685 "read": true, 00:37:17.685 "write": true, 00:37:17.685 "unmap": false, 00:37:17.685 "flush": false, 00:37:17.685 "reset": true, 00:37:17.685 "nvme_admin": false, 00:37:17.685 "nvme_io": false, 00:37:17.685 "nvme_io_md": false, 00:37:17.685 "write_zeroes": true, 00:37:17.685 "zcopy": false, 00:37:17.685 "get_zone_info": false, 00:37:17.685 "zone_management": false, 00:37:17.685 "zone_append": false, 00:37:17.685 "compare": false, 00:37:17.685 "compare_and_write": false, 00:37:17.685 "abort": false, 00:37:17.685 "seek_hole": false, 00:37:17.685 "seek_data": false, 00:37:17.685 "copy": false, 00:37:17.685 "nvme_iov_md": false 00:37:17.685 }, 00:37:17.685 "memory_domains": [ 00:37:17.685 { 00:37:17.685 "dma_device_id": "system", 00:37:17.685 "dma_device_type": 1 00:37:17.685 }, 00:37:17.685 { 00:37:17.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:17.685 "dma_device_type": 2 00:37:17.685 }, 00:37:17.685 { 00:37:17.685 "dma_device_id": "system", 00:37:17.685 "dma_device_type": 1 00:37:17.685 }, 00:37:17.685 { 00:37:17.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:17.685 "dma_device_type": 2 00:37:17.685 }, 00:37:17.685 { 00:37:17.685 "dma_device_id": "system", 00:37:17.685 "dma_device_type": 1 00:37:17.685 }, 00:37:17.685 { 00:37:17.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:17.685 "dma_device_type": 2 00:37:17.685 }, 00:37:17.685 { 00:37:17.685 "dma_device_id": "system", 00:37:17.685 "dma_device_type": 1 00:37:17.685 }, 00:37:17.685 { 00:37:17.685 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:37:17.685 "dma_device_type": 2 00:37:17.685 } 00:37:17.685 ], 00:37:17.685 "driver_specific": { 00:37:17.685 "raid": { 00:37:17.685 "uuid": "1765eed2-5831-4113-975a-d91b7d899215", 00:37:17.685 "strip_size_kb": 0, 00:37:17.685 "state": "online", 00:37:17.685 "raid_level": "raid1", 00:37:17.685 "superblock": true, 00:37:17.685 "num_base_bdevs": 4, 00:37:17.685 "num_base_bdevs_discovered": 4, 00:37:17.685 "num_base_bdevs_operational": 4, 00:37:17.685 "base_bdevs_list": [ 00:37:17.685 { 00:37:17.685 "name": "pt1", 00:37:17.685 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:17.685 "is_configured": true, 00:37:17.685 "data_offset": 2048, 00:37:17.685 "data_size": 63488 00:37:17.685 }, 00:37:17.685 { 00:37:17.685 "name": "pt2", 00:37:17.685 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:17.685 "is_configured": true, 00:37:17.685 "data_offset": 2048, 00:37:17.685 "data_size": 63488 00:37:17.685 }, 00:37:17.685 { 00:37:17.685 "name": "pt3", 00:37:17.686 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:17.686 "is_configured": true, 00:37:17.686 "data_offset": 2048, 00:37:17.686 "data_size": 63488 00:37:17.686 }, 00:37:17.686 { 00:37:17.686 "name": "pt4", 00:37:17.686 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:17.686 "is_configured": true, 00:37:17.686 "data_offset": 2048, 00:37:17.686 "data_size": 63488 00:37:17.686 } 00:37:17.686 ] 00:37:17.686 } 00:37:17.686 } 00:37:17.686 }' 00:37:17.686 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:37:17.946 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:37:17.946 pt2 00:37:17.946 pt3 00:37:17.946 pt4' 00:37:17.946 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:17.946 17:34:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:37:17.946 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:17.946 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:17.946 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:37:17.946 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.946 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:17.946 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.946 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:17.946 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:17.946 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:17.946 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:17.946 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:37:17.946 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.946 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:17.946 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.946 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:17.946 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:17.946 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:17.946 17:34:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:17.946 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:37:17.946 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.946 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:17.946 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.946 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:17.946 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:17.946 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:17.946 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:37:17.946 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.946 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:17.946 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:17.946 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.946 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:17.946 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:17.946 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:37:17.946 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:37:17.946 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:37:17.946 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:17.946 [2024-11-26 17:34:18.622113] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1765eed2-5831-4113-975a-d91b7d899215 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 1765eed2-5831-4113-975a-d91b7d899215 ']' 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:18.206 [2024-11-26 17:34:18.669677] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:18.206 [2024-11-26 17:34:18.669711] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:18.206 [2024-11-26 17:34:18.669807] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:18.206 [2024-11-26 17:34:18.669900] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:18.206 [2024-11-26 17:34:18.669934] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:18.206 17:34:18 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:18.206 [2024-11-26 17:34:18.805500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:37:18.206 [2024-11-26 17:34:18.807693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:37:18.206 [2024-11-26 17:34:18.807756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:37:18.206 [2024-11-26 17:34:18.807799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:37:18.206 [2024-11-26 17:34:18.807860] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:37:18.206 [2024-11-26 17:34:18.807921] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:37:18.206 [2024-11-26 17:34:18.807943] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:37:18.206 [2024-11-26 17:34:18.807967] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:37:18.206 [2024-11-26 17:34:18.807983] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:18.206 [2024-11-26 17:34:18.807995] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:37:18.206 request: 00:37:18.206 { 00:37:18.206 "name": "raid_bdev1", 00:37:18.206 "raid_level": "raid1", 00:37:18.206 "base_bdevs": [ 00:37:18.206 "malloc1", 00:37:18.206 "malloc2", 00:37:18.206 "malloc3", 00:37:18.206 "malloc4" 00:37:18.206 ], 00:37:18.206 "superblock": false, 00:37:18.206 "method": "bdev_raid_create", 00:37:18.206 "req_id": 1 00:37:18.206 } 00:37:18.206 Got JSON-RPC error response 00:37:18.206 response: 00:37:18.206 { 00:37:18.206 "code": -17, 00:37:18.206 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:37:18.206 } 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:37:18.206 
17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:18.206 [2024-11-26 17:34:18.865366] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:37:18.206 [2024-11-26 17:34:18.865446] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:18.206 [2024-11-26 17:34:18.865466] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:37:18.206 [2024-11-26 17:34:18.865478] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:18.206 [2024-11-26 17:34:18.868009] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:18.206 [2024-11-26 17:34:18.868061] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:37:18.206 [2024-11-26 17:34:18.868157] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:37:18.206 [2024-11-26 17:34:18.868232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:37:18.206 pt1 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:18.206 17:34:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:18.206 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:18.207 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:18.207 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:18.207 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:18.207 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.207 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:18.207 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.466 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:18.466 "name": "raid_bdev1", 00:37:18.466 "uuid": "1765eed2-5831-4113-975a-d91b7d899215", 00:37:18.466 "strip_size_kb": 0, 00:37:18.466 "state": "configuring", 00:37:18.466 "raid_level": "raid1", 00:37:18.466 "superblock": true, 00:37:18.466 "num_base_bdevs": 4, 00:37:18.466 "num_base_bdevs_discovered": 1, 00:37:18.466 "num_base_bdevs_operational": 4, 00:37:18.466 "base_bdevs_list": [ 00:37:18.466 { 00:37:18.466 "name": "pt1", 00:37:18.466 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:18.466 "is_configured": true, 00:37:18.466 "data_offset": 2048, 00:37:18.466 "data_size": 63488 00:37:18.466 }, 00:37:18.466 { 00:37:18.466 "name": null, 00:37:18.466 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:18.466 "is_configured": false, 00:37:18.466 "data_offset": 2048, 00:37:18.466 "data_size": 63488 00:37:18.466 }, 00:37:18.466 { 00:37:18.466 "name": null, 00:37:18.466 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:18.466 
"is_configured": false, 00:37:18.466 "data_offset": 2048, 00:37:18.466 "data_size": 63488 00:37:18.466 }, 00:37:18.466 { 00:37:18.466 "name": null, 00:37:18.466 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:18.466 "is_configured": false, 00:37:18.466 "data_offset": 2048, 00:37:18.466 "data_size": 63488 00:37:18.466 } 00:37:18.466 ] 00:37:18.466 }' 00:37:18.466 17:34:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:18.466 17:34:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:18.723 17:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:37:18.723 17:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:37:18.723 17:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.723 17:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:18.723 [2024-11-26 17:34:19.340580] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:37:18.723 [2024-11-26 17:34:19.340658] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:18.723 [2024-11-26 17:34:19.340682] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:37:18.723 [2024-11-26 17:34:19.340694] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:18.723 [2024-11-26 17:34:19.341154] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:18.723 [2024-11-26 17:34:19.341190] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:37:18.723 [2024-11-26 17:34:19.341280] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:37:18.723 [2024-11-26 17:34:19.341313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:37:18.723 pt2 00:37:18.723 17:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.723 17:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:37:18.723 17:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.723 17:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:18.723 [2024-11-26 17:34:19.348575] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:37:18.723 17:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.723 17:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:37:18.723 17:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:18.723 17:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:18.723 17:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:18.724 17:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:18.724 17:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:18.724 17:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:18.724 17:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:18.724 17:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:18.724 17:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:18.724 17:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:18.724 17:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.724 17:34:19 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:18.724 17:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:18.724 17:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.724 17:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:18.724 "name": "raid_bdev1", 00:37:18.724 "uuid": "1765eed2-5831-4113-975a-d91b7d899215", 00:37:18.724 "strip_size_kb": 0, 00:37:18.724 "state": "configuring", 00:37:18.724 "raid_level": "raid1", 00:37:18.724 "superblock": true, 00:37:18.724 "num_base_bdevs": 4, 00:37:18.724 "num_base_bdevs_discovered": 1, 00:37:18.724 "num_base_bdevs_operational": 4, 00:37:18.724 "base_bdevs_list": [ 00:37:18.724 { 00:37:18.724 "name": "pt1", 00:37:18.724 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:18.724 "is_configured": true, 00:37:18.724 "data_offset": 2048, 00:37:18.724 "data_size": 63488 00:37:18.724 }, 00:37:18.724 { 00:37:18.724 "name": null, 00:37:18.724 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:18.724 "is_configured": false, 00:37:18.724 "data_offset": 0, 00:37:18.724 "data_size": 63488 00:37:18.724 }, 00:37:18.724 { 00:37:18.724 "name": null, 00:37:18.724 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:18.724 "is_configured": false, 00:37:18.724 "data_offset": 2048, 00:37:18.724 "data_size": 63488 00:37:18.724 }, 00:37:18.724 { 00:37:18.724 "name": null, 00:37:18.724 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:18.724 "is_configured": false, 00:37:18.724 "data_offset": 2048, 00:37:18.724 "data_size": 63488 00:37:18.724 } 00:37:18.724 ] 00:37:18.724 }' 00:37:18.724 17:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:18.724 17:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:19.292 17:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:37:19.292 17:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:37:19.292 17:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:37:19.292 17:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:19.292 17:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:19.292 [2024-11-26 17:34:19.811790] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:37:19.292 [2024-11-26 17:34:19.811943] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:19.292 [2024-11-26 17:34:19.811976] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:37:19.292 [2024-11-26 17:34:19.811989] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:19.292 [2024-11-26 17:34:19.812597] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:19.292 [2024-11-26 17:34:19.812621] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:37:19.292 [2024-11-26 17:34:19.812721] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:37:19.292 [2024-11-26 17:34:19.812748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:19.292 pt2 00:37:19.292 17:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:19.292 17:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:37:19.292 17:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:37:19.292 17:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:37:19.292 17:34:19 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:19.292 17:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:19.292 [2024-11-26 17:34:19.823751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:37:19.293 [2024-11-26 17:34:19.823823] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:19.293 [2024-11-26 17:34:19.823845] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:37:19.293 [2024-11-26 17:34:19.823854] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:19.293 [2024-11-26 17:34:19.824316] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:19.293 [2024-11-26 17:34:19.824337] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:37:19.293 [2024-11-26 17:34:19.824425] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:37:19.293 [2024-11-26 17:34:19.824448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:37:19.293 pt3 00:37:19.293 17:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:19.293 17:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:37:19.293 17:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:37:19.293 17:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:37:19.293 17:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:19.293 17:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:19.293 [2024-11-26 17:34:19.835697] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:37:19.293 [2024-11-26 
17:34:19.835746] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:19.293 [2024-11-26 17:34:19.835766] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:37:19.293 [2024-11-26 17:34:19.835777] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:19.293 [2024-11-26 17:34:19.836223] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:19.293 [2024-11-26 17:34:19.836257] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:37:19.293 [2024-11-26 17:34:19.836338] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:37:19.293 [2024-11-26 17:34:19.836368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:37:19.293 [2024-11-26 17:34:19.836563] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:37:19.293 [2024-11-26 17:34:19.836578] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:37:19.293 [2024-11-26 17:34:19.836851] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:37:19.293 [2024-11-26 17:34:19.837027] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:37:19.293 [2024-11-26 17:34:19.837041] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:37:19.293 [2024-11-26 17:34:19.837210] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:19.293 pt4 00:37:19.293 17:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:19.293 17:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:37:19.293 17:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:37:19.293 17:34:19 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:37:19.293 17:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:19.293 17:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:19.293 17:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:19.293 17:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:19.293 17:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:19.293 17:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:19.293 17:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:19.293 17:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:19.293 17:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:19.293 17:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:19.293 17:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:19.293 17:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:19.293 17:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:19.293 17:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:19.293 17:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:19.293 "name": "raid_bdev1", 00:37:19.293 "uuid": "1765eed2-5831-4113-975a-d91b7d899215", 00:37:19.293 "strip_size_kb": 0, 00:37:19.293 "state": "online", 00:37:19.293 "raid_level": "raid1", 00:37:19.293 "superblock": true, 00:37:19.293 "num_base_bdevs": 4, 00:37:19.293 
"num_base_bdevs_discovered": 4, 00:37:19.293 "num_base_bdevs_operational": 4, 00:37:19.293 "base_bdevs_list": [ 00:37:19.293 { 00:37:19.293 "name": "pt1", 00:37:19.293 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:19.293 "is_configured": true, 00:37:19.293 "data_offset": 2048, 00:37:19.293 "data_size": 63488 00:37:19.293 }, 00:37:19.293 { 00:37:19.293 "name": "pt2", 00:37:19.293 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:19.293 "is_configured": true, 00:37:19.293 "data_offset": 2048, 00:37:19.293 "data_size": 63488 00:37:19.293 }, 00:37:19.293 { 00:37:19.293 "name": "pt3", 00:37:19.293 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:19.293 "is_configured": true, 00:37:19.293 "data_offset": 2048, 00:37:19.293 "data_size": 63488 00:37:19.293 }, 00:37:19.293 { 00:37:19.293 "name": "pt4", 00:37:19.293 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:19.293 "is_configured": true, 00:37:19.293 "data_offset": 2048, 00:37:19.293 "data_size": 63488 00:37:19.293 } 00:37:19.293 ] 00:37:19.293 }' 00:37:19.293 17:34:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:19.293 17:34:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:19.862 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:37:19.862 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:37:19.862 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:37:19.862 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:37:19.862 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:37:19.862 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:37:19.862 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:37:19.862 17:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:19.862 17:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:19.862 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:37:19.862 [2024-11-26 17:34:20.351304] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:19.862 17:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:19.862 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:19.862 "name": "raid_bdev1", 00:37:19.862 "aliases": [ 00:37:19.862 "1765eed2-5831-4113-975a-d91b7d899215" 00:37:19.862 ], 00:37:19.862 "product_name": "Raid Volume", 00:37:19.862 "block_size": 512, 00:37:19.862 "num_blocks": 63488, 00:37:19.862 "uuid": "1765eed2-5831-4113-975a-d91b7d899215", 00:37:19.862 "assigned_rate_limits": { 00:37:19.862 "rw_ios_per_sec": 0, 00:37:19.862 "rw_mbytes_per_sec": 0, 00:37:19.862 "r_mbytes_per_sec": 0, 00:37:19.862 "w_mbytes_per_sec": 0 00:37:19.862 }, 00:37:19.862 "claimed": false, 00:37:19.862 "zoned": false, 00:37:19.862 "supported_io_types": { 00:37:19.862 "read": true, 00:37:19.862 "write": true, 00:37:19.862 "unmap": false, 00:37:19.862 "flush": false, 00:37:19.862 "reset": true, 00:37:19.862 "nvme_admin": false, 00:37:19.862 "nvme_io": false, 00:37:19.862 "nvme_io_md": false, 00:37:19.862 "write_zeroes": true, 00:37:19.862 "zcopy": false, 00:37:19.862 "get_zone_info": false, 00:37:19.862 "zone_management": false, 00:37:19.862 "zone_append": false, 00:37:19.862 "compare": false, 00:37:19.862 "compare_and_write": false, 00:37:19.862 "abort": false, 00:37:19.862 "seek_hole": false, 00:37:19.862 "seek_data": false, 00:37:19.862 "copy": false, 00:37:19.862 "nvme_iov_md": false 00:37:19.862 }, 00:37:19.862 "memory_domains": [ 00:37:19.862 { 00:37:19.862 "dma_device_id": "system", 00:37:19.862 
"dma_device_type": 1 00:37:19.862 }, 00:37:19.862 { 00:37:19.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:19.862 "dma_device_type": 2 00:37:19.862 }, 00:37:19.862 { 00:37:19.862 "dma_device_id": "system", 00:37:19.862 "dma_device_type": 1 00:37:19.862 }, 00:37:19.862 { 00:37:19.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:19.862 "dma_device_type": 2 00:37:19.862 }, 00:37:19.862 { 00:37:19.862 "dma_device_id": "system", 00:37:19.862 "dma_device_type": 1 00:37:19.862 }, 00:37:19.862 { 00:37:19.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:19.862 "dma_device_type": 2 00:37:19.862 }, 00:37:19.862 { 00:37:19.862 "dma_device_id": "system", 00:37:19.862 "dma_device_type": 1 00:37:19.862 }, 00:37:19.862 { 00:37:19.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:19.862 "dma_device_type": 2 00:37:19.862 } 00:37:19.862 ], 00:37:19.862 "driver_specific": { 00:37:19.862 "raid": { 00:37:19.862 "uuid": "1765eed2-5831-4113-975a-d91b7d899215", 00:37:19.863 "strip_size_kb": 0, 00:37:19.863 "state": "online", 00:37:19.863 "raid_level": "raid1", 00:37:19.863 "superblock": true, 00:37:19.863 "num_base_bdevs": 4, 00:37:19.863 "num_base_bdevs_discovered": 4, 00:37:19.863 "num_base_bdevs_operational": 4, 00:37:19.863 "base_bdevs_list": [ 00:37:19.863 { 00:37:19.863 "name": "pt1", 00:37:19.863 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:19.863 "is_configured": true, 00:37:19.863 "data_offset": 2048, 00:37:19.863 "data_size": 63488 00:37:19.863 }, 00:37:19.863 { 00:37:19.863 "name": "pt2", 00:37:19.863 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:19.863 "is_configured": true, 00:37:19.863 "data_offset": 2048, 00:37:19.863 "data_size": 63488 00:37:19.863 }, 00:37:19.863 { 00:37:19.863 "name": "pt3", 00:37:19.863 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:19.863 "is_configured": true, 00:37:19.863 "data_offset": 2048, 00:37:19.863 "data_size": 63488 00:37:19.863 }, 00:37:19.863 { 00:37:19.863 "name": "pt4", 00:37:19.863 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:37:19.863 "is_configured": true, 00:37:19.863 "data_offset": 2048, 00:37:19.863 "data_size": 63488 00:37:19.863 } 00:37:19.863 ] 00:37:19.863 } 00:37:19.863 } 00:37:19.863 }' 00:37:19.863 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:37:19.863 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:37:19.863 pt2 00:37:19.863 pt3 00:37:19.863 pt4' 00:37:19.863 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:19.863 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:37:19.863 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:19.863 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:19.863 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:37:19.863 17:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:19.863 17:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:19.863 17:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:19.863 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:19.863 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:19.863 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:19.863 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:37:19.863 17:34:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:37:19.863 17:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:19.863 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:20.162 17:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.162 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:20.162 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:20.162 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:20.162 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:37:20.162 17:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.162 17:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:20.162 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:20.162 17:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.162 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:20.162 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:20.162 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:20.162 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:20.162 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:37:20.162 17:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:37:20.162 17:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:20.162 17:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.162 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:20.162 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:20.162 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:37:20.162 17:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.162 17:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:20.162 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:37:20.162 [2024-11-26 17:34:20.710701] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:20.162 17:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.162 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 1765eed2-5831-4113-975a-d91b7d899215 '!=' 1765eed2-5831-4113-975a-d91b7d899215 ']' 00:37:20.162 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:37:20.162 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:37:20.162 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:37:20.162 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:37:20.162 17:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.162 17:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:20.162 [2024-11-26 17:34:20.762307] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:37:20.162 17:34:20 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.162 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:37:20.162 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:20.162 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:20.162 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:20.162 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:20.162 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:20.162 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:20.162 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:20.162 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:20.162 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:20.162 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:20.162 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:20.162 17:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.162 17:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:20.162 17:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.162 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:20.162 "name": "raid_bdev1", 00:37:20.162 "uuid": "1765eed2-5831-4113-975a-d91b7d899215", 00:37:20.162 "strip_size_kb": 0, 00:37:20.162 "state": "online", 
00:37:20.162 "raid_level": "raid1", 00:37:20.162 "superblock": true, 00:37:20.162 "num_base_bdevs": 4, 00:37:20.162 "num_base_bdevs_discovered": 3, 00:37:20.162 "num_base_bdevs_operational": 3, 00:37:20.162 "base_bdevs_list": [ 00:37:20.162 { 00:37:20.162 "name": null, 00:37:20.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:20.162 "is_configured": false, 00:37:20.162 "data_offset": 0, 00:37:20.162 "data_size": 63488 00:37:20.162 }, 00:37:20.162 { 00:37:20.162 "name": "pt2", 00:37:20.162 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:20.162 "is_configured": true, 00:37:20.162 "data_offset": 2048, 00:37:20.162 "data_size": 63488 00:37:20.162 }, 00:37:20.162 { 00:37:20.162 "name": "pt3", 00:37:20.162 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:20.162 "is_configured": true, 00:37:20.162 "data_offset": 2048, 00:37:20.162 "data_size": 63488 00:37:20.162 }, 00:37:20.162 { 00:37:20.162 "name": "pt4", 00:37:20.162 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:20.162 "is_configured": true, 00:37:20.162 "data_offset": 2048, 00:37:20.162 "data_size": 63488 00:37:20.162 } 00:37:20.162 ] 00:37:20.162 }' 00:37:20.162 17:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:20.162 17:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:20.732 17:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:37:20.732 17:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.732 17:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:20.732 [2024-11-26 17:34:21.253499] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:20.732 [2024-11-26 17:34:21.253601] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:20.732 [2024-11-26 17:34:21.253716] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:37:20.732 [2024-11-26 17:34:21.253827] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:20.732 [2024-11-26 17:34:21.253883] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:37:20.732 17:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.732 17:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:20.732 17:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.732 17:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:20.732 17:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:37:20.732 17:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.732 17:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:37:20.732 17:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:37:20.732 17:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:37:20.732 17:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:37:20.732 17:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:37:20.732 17:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.732 17:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:20.732 17:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.732 17:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:37:20.732 17:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:37:20.732 
17:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:37:20.732 17:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.732 17:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:20.732 17:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.732 17:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:37:20.732 17:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:37:20.732 17:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:37:20.732 17:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.732 17:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:20.732 17:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.732 17:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:37:20.732 17:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:37:20.732 17:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:37:20.732 17:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:37:20.733 17:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:37:20.733 17:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.733 17:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:20.733 [2024-11-26 17:34:21.345340] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:37:20.733 [2024-11-26 17:34:21.345495] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:20.733 [2024-11-26 17:34:21.345522] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:37:20.733 [2024-11-26 17:34:21.345549] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:20.733 [2024-11-26 17:34:21.348036] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:20.733 [2024-11-26 17:34:21.348078] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:37:20.733 [2024-11-26 17:34:21.348180] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:37:20.733 [2024-11-26 17:34:21.348253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:20.733 pt2 00:37:20.733 17:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.733 17:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:37:20.733 17:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:20.733 17:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:20.733 17:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:20.733 17:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:20.733 17:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:20.733 17:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:20.733 17:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:20.733 17:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:20.733 17:34:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:37:20.733 17:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:20.733 17:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:20.733 17:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.733 17:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:20.733 17:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.733 17:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:20.733 "name": "raid_bdev1", 00:37:20.733 "uuid": "1765eed2-5831-4113-975a-d91b7d899215", 00:37:20.733 "strip_size_kb": 0, 00:37:20.733 "state": "configuring", 00:37:20.733 "raid_level": "raid1", 00:37:20.733 "superblock": true, 00:37:20.733 "num_base_bdevs": 4, 00:37:20.733 "num_base_bdevs_discovered": 1, 00:37:20.733 "num_base_bdevs_operational": 3, 00:37:20.733 "base_bdevs_list": [ 00:37:20.733 { 00:37:20.733 "name": null, 00:37:20.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:20.733 "is_configured": false, 00:37:20.733 "data_offset": 2048, 00:37:20.733 "data_size": 63488 00:37:20.733 }, 00:37:20.733 { 00:37:20.733 "name": "pt2", 00:37:20.733 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:20.733 "is_configured": true, 00:37:20.733 "data_offset": 2048, 00:37:20.733 "data_size": 63488 00:37:20.733 }, 00:37:20.733 { 00:37:20.733 "name": null, 00:37:20.733 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:20.733 "is_configured": false, 00:37:20.733 "data_offset": 2048, 00:37:20.733 "data_size": 63488 00:37:20.733 }, 00:37:20.733 { 00:37:20.733 "name": null, 00:37:20.733 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:20.733 "is_configured": false, 00:37:20.733 "data_offset": 2048, 00:37:20.733 "data_size": 63488 00:37:20.733 } 00:37:20.733 ] 00:37:20.733 }' 
00:37:20.733 17:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:20.733 17:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:21.302 17:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:37:21.302 17:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:37:21.302 17:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:37:21.302 17:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.302 17:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:21.302 [2024-11-26 17:34:21.800660] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:37:21.302 [2024-11-26 17:34:21.800741] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:21.302 [2024-11-26 17:34:21.800766] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:37:21.302 [2024-11-26 17:34:21.800778] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:21.302 [2024-11-26 17:34:21.801307] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:21.302 [2024-11-26 17:34:21.801328] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:37:21.302 [2024-11-26 17:34:21.801427] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:37:21.302 [2024-11-26 17:34:21.801452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:37:21.302 pt3 00:37:21.302 17:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.302 17:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:37:21.302 17:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:21.302 17:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:21.302 17:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:21.302 17:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:21.302 17:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:21.302 17:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:21.302 17:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:21.302 17:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:21.302 17:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:21.302 17:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:21.302 17:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:21.302 17:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.302 17:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:21.302 17:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.302 17:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:21.302 "name": "raid_bdev1", 00:37:21.302 "uuid": "1765eed2-5831-4113-975a-d91b7d899215", 00:37:21.302 "strip_size_kb": 0, 00:37:21.302 "state": "configuring", 00:37:21.302 "raid_level": "raid1", 00:37:21.302 "superblock": true, 00:37:21.302 "num_base_bdevs": 4, 00:37:21.302 "num_base_bdevs_discovered": 2, 00:37:21.302 "num_base_bdevs_operational": 3, 00:37:21.302 
"base_bdevs_list": [ 00:37:21.302 { 00:37:21.302 "name": null, 00:37:21.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:21.302 "is_configured": false, 00:37:21.302 "data_offset": 2048, 00:37:21.302 "data_size": 63488 00:37:21.302 }, 00:37:21.302 { 00:37:21.302 "name": "pt2", 00:37:21.302 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:21.302 "is_configured": true, 00:37:21.302 "data_offset": 2048, 00:37:21.302 "data_size": 63488 00:37:21.302 }, 00:37:21.302 { 00:37:21.302 "name": "pt3", 00:37:21.302 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:21.302 "is_configured": true, 00:37:21.302 "data_offset": 2048, 00:37:21.302 "data_size": 63488 00:37:21.302 }, 00:37:21.302 { 00:37:21.302 "name": null, 00:37:21.302 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:21.302 "is_configured": false, 00:37:21.302 "data_offset": 2048, 00:37:21.302 "data_size": 63488 00:37:21.302 } 00:37:21.302 ] 00:37:21.302 }' 00:37:21.302 17:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:21.302 17:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:21.872 17:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:37:21.872 17:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:37:21.872 17:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:37:21.872 17:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:37:21.872 17:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.872 17:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:21.872 [2024-11-26 17:34:22.287847] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:37:21.872 [2024-11-26 17:34:22.287999] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:21.872 [2024-11-26 17:34:22.288062] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:37:21.872 [2024-11-26 17:34:22.288121] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:21.872 [2024-11-26 17:34:22.288752] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:21.872 [2024-11-26 17:34:22.288829] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:37:21.872 [2024-11-26 17:34:22.288966] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:37:21.872 [2024-11-26 17:34:22.289027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:37:21.872 [2024-11-26 17:34:22.289212] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:37:21.872 [2024-11-26 17:34:22.289256] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:37:21.872 [2024-11-26 17:34:22.289568] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:37:21.872 [2024-11-26 17:34:22.289799] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:37:21.872 [2024-11-26 17:34:22.289852] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:37:21.872 [2024-11-26 17:34:22.290057] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:21.872 pt4 00:37:21.872 17:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.872 17:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:37:21.872 17:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:21.872 17:34:22 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:21.872 17:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:21.872 17:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:21.872 17:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:21.872 17:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:21.872 17:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:21.872 17:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:21.872 17:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:21.872 17:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:21.872 17:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:21.872 17:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.872 17:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:21.872 17:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.872 17:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:21.872 "name": "raid_bdev1", 00:37:21.872 "uuid": "1765eed2-5831-4113-975a-d91b7d899215", 00:37:21.872 "strip_size_kb": 0, 00:37:21.872 "state": "online", 00:37:21.872 "raid_level": "raid1", 00:37:21.872 "superblock": true, 00:37:21.872 "num_base_bdevs": 4, 00:37:21.872 "num_base_bdevs_discovered": 3, 00:37:21.872 "num_base_bdevs_operational": 3, 00:37:21.872 "base_bdevs_list": [ 00:37:21.872 { 00:37:21.872 "name": null, 00:37:21.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:21.872 "is_configured": false, 00:37:21.872 
"data_offset": 2048, 00:37:21.872 "data_size": 63488 00:37:21.872 }, 00:37:21.872 { 00:37:21.872 "name": "pt2", 00:37:21.872 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:21.872 "is_configured": true, 00:37:21.872 "data_offset": 2048, 00:37:21.872 "data_size": 63488 00:37:21.872 }, 00:37:21.872 { 00:37:21.872 "name": "pt3", 00:37:21.872 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:21.872 "is_configured": true, 00:37:21.872 "data_offset": 2048, 00:37:21.872 "data_size": 63488 00:37:21.872 }, 00:37:21.872 { 00:37:21.872 "name": "pt4", 00:37:21.872 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:21.872 "is_configured": true, 00:37:21.872 "data_offset": 2048, 00:37:21.872 "data_size": 63488 00:37:21.872 } 00:37:21.872 ] 00:37:21.872 }' 00:37:21.872 17:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:21.872 17:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:22.132 17:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:37:22.132 17:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.132 17:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:22.132 [2024-11-26 17:34:22.711060] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:22.132 [2024-11-26 17:34:22.711096] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:22.132 [2024-11-26 17:34:22.711186] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:22.132 [2024-11-26 17:34:22.711266] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:22.132 [2024-11-26 17:34:22.711280] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:37:22.132 17:34:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.132 17:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:22.132 17:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.132 17:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:37:22.132 17:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:22.132 17:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.132 17:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:37:22.132 17:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:37:22.132 17:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:37:22.132 17:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:37:22.132 17:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:37:22.132 17:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.132 17:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:22.132 17:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.132 17:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:37:22.132 17:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.132 17:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:22.132 [2024-11-26 17:34:22.786930] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:37:22.132 [2024-11-26 17:34:22.787028] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:37:22.132 [2024-11-26 17:34:22.787048] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:37:22.132 [2024-11-26 17:34:22.787061] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:22.132 [2024-11-26 17:34:22.789429] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:22.132 [2024-11-26 17:34:22.789538] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:37:22.132 [2024-11-26 17:34:22.789646] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:37:22.132 [2024-11-26 17:34:22.789712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:37:22.132 [2024-11-26 17:34:22.789877] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:37:22.132 [2024-11-26 17:34:22.789895] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:22.132 [2024-11-26 17:34:22.789912] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:37:22.132 [2024-11-26 17:34:22.789990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:22.132 [2024-11-26 17:34:22.790132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:37:22.132 pt1 00:37:22.132 17:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.132 17:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:37:22.132 17:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:37:22.132 17:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:22.132 17:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:37:22.132 17:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:22.132 17:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:22.132 17:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:22.132 17:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:22.133 17:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:22.133 17:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:22.133 17:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:22.133 17:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:22.133 17:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:22.133 17:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.133 17:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:22.133 17:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.393 17:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:22.393 "name": "raid_bdev1", 00:37:22.393 "uuid": "1765eed2-5831-4113-975a-d91b7d899215", 00:37:22.393 "strip_size_kb": 0, 00:37:22.393 "state": "configuring", 00:37:22.393 "raid_level": "raid1", 00:37:22.393 "superblock": true, 00:37:22.393 "num_base_bdevs": 4, 00:37:22.393 "num_base_bdevs_discovered": 2, 00:37:22.393 "num_base_bdevs_operational": 3, 00:37:22.393 "base_bdevs_list": [ 00:37:22.393 { 00:37:22.393 "name": null, 00:37:22.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:22.393 "is_configured": false, 00:37:22.393 "data_offset": 2048, 00:37:22.393 
"data_size": 63488 00:37:22.393 }, 00:37:22.393 { 00:37:22.393 "name": "pt2", 00:37:22.393 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:22.393 "is_configured": true, 00:37:22.393 "data_offset": 2048, 00:37:22.393 "data_size": 63488 00:37:22.393 }, 00:37:22.393 { 00:37:22.393 "name": "pt3", 00:37:22.393 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:22.393 "is_configured": true, 00:37:22.393 "data_offset": 2048, 00:37:22.393 "data_size": 63488 00:37:22.393 }, 00:37:22.393 { 00:37:22.393 "name": null, 00:37:22.393 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:22.393 "is_configured": false, 00:37:22.393 "data_offset": 2048, 00:37:22.393 "data_size": 63488 00:37:22.393 } 00:37:22.393 ] 00:37:22.393 }' 00:37:22.393 17:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:22.393 17:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:22.653 17:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:37:22.653 17:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:37:22.653 17:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.653 17:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:22.653 17:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.653 17:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:37:22.653 17:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:37:22.653 17:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.653 17:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:22.653 [2024-11-26 
17:34:23.294117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:37:22.653 [2024-11-26 17:34:23.294236] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:22.653 [2024-11-26 17:34:23.294281] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:37:22.653 [2024-11-26 17:34:23.294319] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:22.653 [2024-11-26 17:34:23.294839] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:22.653 [2024-11-26 17:34:23.294901] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:37:22.653 [2024-11-26 17:34:23.295025] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:37:22.653 [2024-11-26 17:34:23.295083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:37:22.653 [2024-11-26 17:34:23.295264] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:37:22.653 [2024-11-26 17:34:23.295305] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:37:22.653 [2024-11-26 17:34:23.295616] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:37:22.653 [2024-11-26 17:34:23.295836] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:37:22.653 [2024-11-26 17:34:23.295881] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:37:22.653 [2024-11-26 17:34:23.296090] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:22.653 pt4 00:37:22.653 17:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.653 17:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:37:22.653 17:34:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:22.653 17:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:22.653 17:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:22.653 17:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:22.653 17:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:22.653 17:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:22.653 17:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:22.653 17:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:22.653 17:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:22.653 17:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:22.653 17:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:22.653 17:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.653 17:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:22.653 17:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.913 17:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:22.913 "name": "raid_bdev1", 00:37:22.913 "uuid": "1765eed2-5831-4113-975a-d91b7d899215", 00:37:22.913 "strip_size_kb": 0, 00:37:22.913 "state": "online", 00:37:22.913 "raid_level": "raid1", 00:37:22.913 "superblock": true, 00:37:22.913 "num_base_bdevs": 4, 00:37:22.913 "num_base_bdevs_discovered": 3, 00:37:22.913 "num_base_bdevs_operational": 3, 00:37:22.913 "base_bdevs_list": [ 00:37:22.913 { 
00:37:22.913 "name": null, 00:37:22.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:22.913 "is_configured": false, 00:37:22.913 "data_offset": 2048, 00:37:22.913 "data_size": 63488 00:37:22.913 }, 00:37:22.913 { 00:37:22.913 "name": "pt2", 00:37:22.913 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:22.913 "is_configured": true, 00:37:22.913 "data_offset": 2048, 00:37:22.913 "data_size": 63488 00:37:22.913 }, 00:37:22.913 { 00:37:22.913 "name": "pt3", 00:37:22.913 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:22.913 "is_configured": true, 00:37:22.913 "data_offset": 2048, 00:37:22.913 "data_size": 63488 00:37:22.913 }, 00:37:22.913 { 00:37:22.913 "name": "pt4", 00:37:22.913 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:22.913 "is_configured": true, 00:37:22.913 "data_offset": 2048, 00:37:22.913 "data_size": 63488 00:37:22.913 } 00:37:22.913 ] 00:37:22.913 }' 00:37:22.913 17:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:22.913 17:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:23.173 17:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:37:23.173 17:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.173 17:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:23.173 17:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:37:23.173 17:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.173 17:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:37:23.173 17:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:37:23.173 17:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.173 
17:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:23.173 17:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:37:23.173 [2024-11-26 17:34:23.841573] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:23.173 17:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.432 17:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 1765eed2-5831-4113-975a-d91b7d899215 '!=' 1765eed2-5831-4113-975a-d91b7d899215 ']' 00:37:23.432 17:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74808 00:37:23.432 17:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74808 ']' 00:37:23.432 17:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74808 00:37:23.432 17:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:37:23.432 17:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:23.432 17:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74808 00:37:23.432 killing process with pid 74808 00:37:23.432 17:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:23.432 17:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:23.432 17:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74808' 00:37:23.432 17:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74808 00:37:23.432 [2024-11-26 17:34:23.923768] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:23.432 [2024-11-26 17:34:23.923866] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:23.432 17:34:23 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74808 00:37:23.432 [2024-11-26 17:34:23.923953] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:23.432 [2024-11-26 17:34:23.923968] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:37:23.691 [2024-11-26 17:34:24.348081] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:25.070 ************************************ 00:37:25.070 17:34:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:37:25.070 00:37:25.070 real 0m8.955s 00:37:25.070 user 0m14.141s 00:37:25.070 sys 0m1.579s 00:37:25.070 17:34:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:25.070 17:34:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:25.070 END TEST raid_superblock_test 00:37:25.070 ************************************ 00:37:25.070 17:34:25 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:37:25.070 17:34:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:37:25.070 17:34:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:25.070 17:34:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:25.070 ************************************ 00:37:25.070 START TEST raid_read_error_test 00:37:25.070 ************************************ 00:37:25.070 17:34:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:37:25.070 17:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:37:25.070 17:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:37:25.070 17:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:37:25.070 
17:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:37:25.070 17:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:37:25.070 17:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:37:25.070 17:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:37:25.070 17:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:37:25.070 17:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:37:25.070 17:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:37:25.070 17:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:37:25.070 17:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:37:25.070 17:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:37:25.070 17:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:37:25.070 17:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:37:25.070 17:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:37:25.070 17:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:37:25.070 17:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:37:25.070 17:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:37:25.070 17:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:37:25.070 17:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:37:25.070 17:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:37:25.070 17:34:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:37:25.070 17:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:37:25.070 17:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:37:25.070 17:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:37:25.070 17:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:37:25.070 17:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.XR9GUQrCML 00:37:25.070 17:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:37:25.070 17:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75301 00:37:25.070 17:34:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75301 00:37:25.070 17:34:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75301 ']' 00:37:25.070 17:34:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:25.070 17:34:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:25.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:25.070 17:34:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:25.070 17:34:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:25.070 17:34:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:25.070 [2024-11-26 17:34:25.683580] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:37:25.070 [2024-11-26 17:34:25.683726] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75301 ] 00:37:25.329 [2024-11-26 17:34:25.875116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:25.329 [2024-11-26 17:34:25.994733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:25.588 [2024-11-26 17:34:26.193164] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:25.588 [2024-11-26 17:34:26.193238] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:26.158 BaseBdev1_malloc 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:26.158 true 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:26.158 [2024-11-26 17:34:26.617176] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:37:26.158 [2024-11-26 17:34:26.617235] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:26.158 [2024-11-26 17:34:26.617255] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:37:26.158 [2024-11-26 17:34:26.617266] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:26.158 [2024-11-26 17:34:26.619398] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:26.158 [2024-11-26 17:34:26.619442] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:37:26.158 BaseBdev1 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:26.158 BaseBdev2_malloc 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:26.158 true 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:26.158 [2024-11-26 17:34:26.681149] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:37:26.158 [2024-11-26 17:34:26.681207] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:26.158 [2024-11-26 17:34:26.681223] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:37:26.158 [2024-11-26 17:34:26.681233] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:26.158 [2024-11-26 17:34:26.683239] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:26.158 [2024-11-26 17:34:26.683277] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:37:26.158 BaseBdev2 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:26.158 BaseBdev3_malloc 00:37:26.158 17:34:26 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:26.158 true 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:26.158 [2024-11-26 17:34:26.760206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:37:26.158 [2024-11-26 17:34:26.760264] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:26.158 [2024-11-26 17:34:26.760282] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:37:26.158 [2024-11-26 17:34:26.760294] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:26.158 [2024-11-26 17:34:26.762317] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:26.158 [2024-11-26 17:34:26.762356] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:37:26.158 BaseBdev3 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:26.158 BaseBdev4_malloc 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:26.158 true 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:26.158 [2024-11-26 17:34:26.826645] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:37:26.158 [2024-11-26 17:34:26.826702] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:26.158 [2024-11-26 17:34:26.826720] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:37:26.158 [2024-11-26 17:34:26.826731] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:26.158 [2024-11-26 17:34:26.828834] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:26.158 [2024-11-26 17:34:26.828877] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:37:26.158 BaseBdev4 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.158 17:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:26.159 [2024-11-26 17:34:26.838693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:26.159 [2024-11-26 17:34:26.840565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:26.159 [2024-11-26 17:34:26.840641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:37:26.159 [2024-11-26 17:34:26.840705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:37:26.159 [2024-11-26 17:34:26.840942] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:37:26.159 [2024-11-26 17:34:26.840963] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:37:26.159 [2024-11-26 17:34:26.841198] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:37:26.159 [2024-11-26 17:34:26.841368] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:37:26.159 [2024-11-26 17:34:26.841384] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:37:26.159 [2024-11-26 17:34:26.841557] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:26.159 17:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:26.159 17:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:37:26.159 17:34:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:26.159 17:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:26.159 17:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:26.159 17:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:26.159 17:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:26.159 17:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:26.159 17:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:26.159 17:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:26.159 17:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:26.418 17:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:26.418 17:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:26.418 17:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.418 17:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:26.418 17:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:26.418 17:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:26.418 "name": "raid_bdev1", 00:37:26.418 "uuid": "ba6a2c72-3ae9-4a81-9031-76bda414da18", 00:37:26.418 "strip_size_kb": 0, 00:37:26.418 "state": "online", 00:37:26.418 "raid_level": "raid1", 00:37:26.418 "superblock": true, 00:37:26.418 "num_base_bdevs": 4, 00:37:26.418 "num_base_bdevs_discovered": 4, 00:37:26.418 "num_base_bdevs_operational": 4, 00:37:26.418 "base_bdevs_list": [ 00:37:26.418 { 
00:37:26.418 "name": "BaseBdev1", 00:37:26.418 "uuid": "6d8f5070-51e4-5530-b31a-86aae67b1ad9", 00:37:26.418 "is_configured": true, 00:37:26.418 "data_offset": 2048, 00:37:26.418 "data_size": 63488 00:37:26.418 }, 00:37:26.418 { 00:37:26.418 "name": "BaseBdev2", 00:37:26.418 "uuid": "942e6115-bd8a-5183-adf9-89b499b9326e", 00:37:26.418 "is_configured": true, 00:37:26.418 "data_offset": 2048, 00:37:26.418 "data_size": 63488 00:37:26.418 }, 00:37:26.418 { 00:37:26.418 "name": "BaseBdev3", 00:37:26.418 "uuid": "d1a42f08-9991-5c39-90ac-91515d1793f8", 00:37:26.418 "is_configured": true, 00:37:26.418 "data_offset": 2048, 00:37:26.418 "data_size": 63488 00:37:26.418 }, 00:37:26.418 { 00:37:26.418 "name": "BaseBdev4", 00:37:26.418 "uuid": "8a32cdd6-a8cb-5f5e-b155-a6bd082dd196", 00:37:26.418 "is_configured": true, 00:37:26.418 "data_offset": 2048, 00:37:26.418 "data_size": 63488 00:37:26.418 } 00:37:26.418 ] 00:37:26.418 }' 00:37:26.418 17:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:26.418 17:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:26.685 17:34:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:37:26.685 17:34:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:37:26.685 [2024-11-26 17:34:27.359148] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:37:27.631 17:34:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:37:27.631 17:34:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.632 17:34:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:27.632 17:34:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.632 17:34:28 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:37:27.632 17:34:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:37:27.632 17:34:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:37:27.632 17:34:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:37:27.632 17:34:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:37:27.632 17:34:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:27.632 17:34:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:27.632 17:34:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:27.632 17:34:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:27.632 17:34:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:27.632 17:34:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:27.632 17:34:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:27.632 17:34:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:27.632 17:34:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:27.632 17:34:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:27.632 17:34:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:27.632 17:34:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.632 17:34:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:27.632 17:34:28 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.891 17:34:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:27.891 "name": "raid_bdev1", 00:37:27.891 "uuid": "ba6a2c72-3ae9-4a81-9031-76bda414da18", 00:37:27.891 "strip_size_kb": 0, 00:37:27.891 "state": "online", 00:37:27.891 "raid_level": "raid1", 00:37:27.891 "superblock": true, 00:37:27.891 "num_base_bdevs": 4, 00:37:27.891 "num_base_bdevs_discovered": 4, 00:37:27.891 "num_base_bdevs_operational": 4, 00:37:27.891 "base_bdevs_list": [ 00:37:27.891 { 00:37:27.891 "name": "BaseBdev1", 00:37:27.891 "uuid": "6d8f5070-51e4-5530-b31a-86aae67b1ad9", 00:37:27.891 "is_configured": true, 00:37:27.891 "data_offset": 2048, 00:37:27.891 "data_size": 63488 00:37:27.891 }, 00:37:27.891 { 00:37:27.891 "name": "BaseBdev2", 00:37:27.891 "uuid": "942e6115-bd8a-5183-adf9-89b499b9326e", 00:37:27.891 "is_configured": true, 00:37:27.891 "data_offset": 2048, 00:37:27.891 "data_size": 63488 00:37:27.891 }, 00:37:27.891 { 00:37:27.891 "name": "BaseBdev3", 00:37:27.891 "uuid": "d1a42f08-9991-5c39-90ac-91515d1793f8", 00:37:27.891 "is_configured": true, 00:37:27.891 "data_offset": 2048, 00:37:27.891 "data_size": 63488 00:37:27.891 }, 00:37:27.891 { 00:37:27.891 "name": "BaseBdev4", 00:37:27.891 "uuid": "8a32cdd6-a8cb-5f5e-b155-a6bd082dd196", 00:37:27.891 "is_configured": true, 00:37:27.891 "data_offset": 2048, 00:37:27.891 "data_size": 63488 00:37:27.891 } 00:37:27.891 ] 00:37:27.891 }' 00:37:27.891 17:34:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:27.891 17:34:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:28.151 17:34:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:37:28.151 17:34:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.151 17:34:28 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:37:28.151 [2024-11-26 17:34:28.750893] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:28.151 [2024-11-26 17:34:28.750934] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:28.151 [2024-11-26 17:34:28.753985] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:28.151 [2024-11-26 17:34:28.754055] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:28.151 [2024-11-26 17:34:28.754192] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:28.151 [2024-11-26 17:34:28.754210] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:37:28.151 17:34:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.151 { 00:37:28.151 "results": [ 00:37:28.151 { 00:37:28.151 "job": "raid_bdev1", 00:37:28.151 "core_mask": "0x1", 00:37:28.151 "workload": "randrw", 00:37:28.151 "percentage": 50, 00:37:28.151 "status": "finished", 00:37:28.151 "queue_depth": 1, 00:37:28.151 "io_size": 131072, 00:37:28.151 "runtime": 1.392873, 00:37:28.151 "iops": 10142.346071752414, 00:37:28.151 "mibps": 1267.7932589690517, 00:37:28.151 "io_failed": 0, 00:37:28.151 "io_timeout": 0, 00:37:28.151 "avg_latency_us": 95.6371085378644, 00:37:28.151 "min_latency_us": 24.034934497816593, 00:37:28.151 "max_latency_us": 1652.709170305677 00:37:28.151 } 00:37:28.151 ], 00:37:28.151 "core_count": 1 00:37:28.151 } 00:37:28.151 17:34:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75301 00:37:28.151 17:34:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75301 ']' 00:37:28.151 17:34:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75301 00:37:28.151 17:34:28 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:37:28.151 17:34:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:28.151 17:34:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75301 00:37:28.151 17:34:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:28.151 17:34:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:28.151 17:34:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75301' 00:37:28.151 killing process with pid 75301 00:37:28.151 17:34:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75301 00:37:28.151 [2024-11-26 17:34:28.804775] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:28.151 17:34:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75301 00:37:28.719 [2024-11-26 17:34:29.166910] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:30.096 17:34:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.XR9GUQrCML 00:37:30.096 17:34:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:37:30.096 17:34:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:37:30.096 17:34:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:37:30.096 17:34:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:37:30.096 17:34:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:37:30.096 17:34:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:37:30.096 17:34:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:37:30.096 00:37:30.096 real 0m4.910s 00:37:30.096 user 0m5.775s 00:37:30.096 sys 0m0.607s 
00:37:30.096 17:34:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:30.096 17:34:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:30.096 ************************************ 00:37:30.096 END TEST raid_read_error_test 00:37:30.096 ************************************ 00:37:30.096 17:34:30 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:37:30.096 17:34:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:37:30.096 17:34:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:30.096 17:34:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:30.096 ************************************ 00:37:30.096 START TEST raid_write_error_test 00:37:30.096 ************************************ 00:37:30.096 17:34:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:37:30.096 17:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:37:30.096 17:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:37:30.096 17:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:37:30.096 17:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:37:30.096 17:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:37:30.096 17:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:37:30.096 17:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:37:30.096 17:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:37:30.096 17:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:37:30.096 17:34:30 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:37:30.096 17:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:37:30.096 17:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:37:30.096 17:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:37:30.096 17:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:37:30.096 17:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:37:30.096 17:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:37:30.096 17:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:37:30.096 17:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:37:30.096 17:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:37:30.096 17:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:37:30.096 17:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:37:30.096 17:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:37:30.096 17:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:37:30.096 17:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:37:30.096 17:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:37:30.096 17:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:37:30.096 17:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:37:30.096 17:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.XQWaInO0EJ 00:37:30.096 17:34:30 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75452 00:37:30.096 17:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:37:30.096 17:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75452 00:37:30.096 17:34:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75452 ']' 00:37:30.096 17:34:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:30.096 17:34:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:30.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:30.096 17:34:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:30.096 17:34:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:30.096 17:34:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:30.096 [2024-11-26 17:34:30.669909] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:37:30.096 [2024-11-26 17:34:30.670041] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75452 ] 00:37:30.356 [2024-11-26 17:34:30.845328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:30.356 [2024-11-26 17:34:30.972731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:30.615 [2024-11-26 17:34:31.205838] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:30.615 [2024-11-26 17:34:31.205915] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:31.183 17:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:31.183 17:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:37:31.183 17:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:37:31.183 17:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:37:31.183 17:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.183 17:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:31.183 BaseBdev1_malloc 00:37:31.183 17:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.184 17:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:37:31.184 17:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.184 17:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:31.184 true 00:37:31.184 17:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:37:31.184 17:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:37:31.184 17:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.184 17:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:31.184 [2024-11-26 17:34:31.641011] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:37:31.184 [2024-11-26 17:34:31.641090] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:31.184 [2024-11-26 17:34:31.641118] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:37:31.184 [2024-11-26 17:34:31.641132] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:31.184 [2024-11-26 17:34:31.643496] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:31.184 [2024-11-26 17:34:31.643571] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:37:31.184 BaseBdev1 00:37:31.184 17:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.184 17:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:37:31.184 17:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:37:31.184 17:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.184 17:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:31.184 BaseBdev2_malloc 00:37:31.184 17:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.184 17:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:37:31.184 17:34:31 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.184 17:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:31.184 true 00:37:31.184 17:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.184 17:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:37:31.184 17:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.184 17:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:31.184 [2024-11-26 17:34:31.712811] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:37:31.184 [2024-11-26 17:34:31.712875] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:31.184 [2024-11-26 17:34:31.712893] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:37:31.184 [2024-11-26 17:34:31.712906] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:31.184 [2024-11-26 17:34:31.715115] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:31.184 [2024-11-26 17:34:31.715158] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:37:31.184 BaseBdev2 00:37:31.184 17:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.184 17:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:37:31.184 17:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:37:31.184 17:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.184 17:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:37:31.184 BaseBdev3_malloc 00:37:31.184 17:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.184 17:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:37:31.184 17:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.184 17:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:31.184 true 00:37:31.184 17:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.184 17:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:37:31.184 17:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.184 17:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:31.184 [2024-11-26 17:34:31.798211] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:37:31.184 [2024-11-26 17:34:31.798263] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:31.184 [2024-11-26 17:34:31.798279] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:37:31.184 [2024-11-26 17:34:31.798289] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:31.184 [2024-11-26 17:34:31.800483] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:31.184 [2024-11-26 17:34:31.800535] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:37:31.184 BaseBdev3 00:37:31.184 17:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.184 17:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:37:31.184 17:34:31 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:37:31.184 17:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.184 17:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:31.184 BaseBdev4_malloc 00:37:31.184 17:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.184 17:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:37:31.184 17:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.184 17:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:31.184 true 00:37:31.184 17:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.184 17:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:37:31.184 17:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.184 17:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:31.184 [2024-11-26 17:34:31.867147] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:37:31.184 [2024-11-26 17:34:31.867212] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:31.184 [2024-11-26 17:34:31.867234] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:37:31.184 [2024-11-26 17:34:31.867245] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:31.184 [2024-11-26 17:34:31.869605] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:31.184 [2024-11-26 17:34:31.869647] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:37:31.184 BaseBdev4 
00:37:31.184 17:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.184 17:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:37:31.184 17:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.184 17:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:31.444 [2024-11-26 17:34:31.879170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:31.444 [2024-11-26 17:34:31.881142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:31.444 [2024-11-26 17:34:31.881224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:37:31.444 [2024-11-26 17:34:31.881285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:37:31.444 [2024-11-26 17:34:31.881525] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:37:31.444 [2024-11-26 17:34:31.881547] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:37:31.444 [2024-11-26 17:34:31.881794] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:37:31.444 [2024-11-26 17:34:31.881975] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:37:31.444 [2024-11-26 17:34:31.881991] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:37:31.444 [2024-11-26 17:34:31.882179] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:31.444 17:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.444 17:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:37:31.444 17:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:31.444 17:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:31.444 17:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:31.444 17:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:31.444 17:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:31.444 17:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:31.444 17:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:31.444 17:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:31.444 17:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:31.444 17:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:31.444 17:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.444 17:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:31.444 17:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:31.444 17:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.444 17:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:31.444 "name": "raid_bdev1", 00:37:31.444 "uuid": "9ccd07ab-3040-4296-8602-7c719c869153", 00:37:31.444 "strip_size_kb": 0, 00:37:31.444 "state": "online", 00:37:31.444 "raid_level": "raid1", 00:37:31.444 "superblock": true, 00:37:31.444 "num_base_bdevs": 4, 00:37:31.444 "num_base_bdevs_discovered": 4, 00:37:31.444 
"num_base_bdevs_operational": 4, 00:37:31.444 "base_bdevs_list": [ 00:37:31.444 { 00:37:31.444 "name": "BaseBdev1", 00:37:31.444 "uuid": "e7a5d90b-ca76-529f-8fdb-b19de1c25420", 00:37:31.444 "is_configured": true, 00:37:31.444 "data_offset": 2048, 00:37:31.444 "data_size": 63488 00:37:31.444 }, 00:37:31.444 { 00:37:31.444 "name": "BaseBdev2", 00:37:31.444 "uuid": "4aa7f36a-f783-5048-bde9-257eaa8c580e", 00:37:31.444 "is_configured": true, 00:37:31.444 "data_offset": 2048, 00:37:31.444 "data_size": 63488 00:37:31.444 }, 00:37:31.444 { 00:37:31.444 "name": "BaseBdev3", 00:37:31.444 "uuid": "04d68367-bc04-5033-8816-35cd89df8f0a", 00:37:31.444 "is_configured": true, 00:37:31.444 "data_offset": 2048, 00:37:31.444 "data_size": 63488 00:37:31.444 }, 00:37:31.444 { 00:37:31.444 "name": "BaseBdev4", 00:37:31.444 "uuid": "ef2222e6-ba49-57f8-8ad5-361f9782c569", 00:37:31.444 "is_configured": true, 00:37:31.444 "data_offset": 2048, 00:37:31.444 "data_size": 63488 00:37:31.444 } 00:37:31.444 ] 00:37:31.444 }' 00:37:31.444 17:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:31.444 17:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:31.704 17:34:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:37:31.704 17:34:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:37:31.962 [2024-11-26 17:34:32.455631] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:37:32.900 17:34:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:37:32.900 17:34:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:32.900 17:34:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:32.900 [2024-11-26 17:34:33.342763] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:37:32.900 [2024-11-26 17:34:33.342824] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:37:32.900 [2024-11-26 17:34:33.343055] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:37:32.900 17:34:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:32.900 17:34:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:37:32.900 17:34:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:37:32.900 17:34:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:37:32.900 17:34:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:37:32.900 17:34:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:37:32.900 17:34:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:32.900 17:34:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:32.900 17:34:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:32.900 17:34:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:32.900 17:34:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:32.900 17:34:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:32.900 17:34:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:32.900 17:34:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:32.900 17:34:33 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:37:32.900 17:34:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:32.900 17:34:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:32.900 17:34:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:32.900 17:34:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:32.900 17:34:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:32.900 17:34:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:32.900 "name": "raid_bdev1", 00:37:32.900 "uuid": "9ccd07ab-3040-4296-8602-7c719c869153", 00:37:32.900 "strip_size_kb": 0, 00:37:32.900 "state": "online", 00:37:32.900 "raid_level": "raid1", 00:37:32.900 "superblock": true, 00:37:32.900 "num_base_bdevs": 4, 00:37:32.900 "num_base_bdevs_discovered": 3, 00:37:32.900 "num_base_bdevs_operational": 3, 00:37:32.900 "base_bdevs_list": [ 00:37:32.900 { 00:37:32.900 "name": null, 00:37:32.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:32.900 "is_configured": false, 00:37:32.900 "data_offset": 0, 00:37:32.900 "data_size": 63488 00:37:32.900 }, 00:37:32.900 { 00:37:32.900 "name": "BaseBdev2", 00:37:32.900 "uuid": "4aa7f36a-f783-5048-bde9-257eaa8c580e", 00:37:32.900 "is_configured": true, 00:37:32.900 "data_offset": 2048, 00:37:32.900 "data_size": 63488 00:37:32.900 }, 00:37:32.900 { 00:37:32.900 "name": "BaseBdev3", 00:37:32.900 "uuid": "04d68367-bc04-5033-8816-35cd89df8f0a", 00:37:32.900 "is_configured": true, 00:37:32.900 "data_offset": 2048, 00:37:32.900 "data_size": 63488 00:37:32.900 }, 00:37:32.900 { 00:37:32.900 "name": "BaseBdev4", 00:37:32.900 "uuid": "ef2222e6-ba49-57f8-8ad5-361f9782c569", 00:37:32.900 "is_configured": true, 00:37:32.900 "data_offset": 2048, 00:37:32.900 "data_size": 63488 00:37:32.900 } 00:37:32.900 ] 
00:37:32.900 }' 00:37:32.900 17:34:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:32.900 17:34:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:33.160 17:34:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:37:33.160 17:34:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:33.160 17:34:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:33.160 [2024-11-26 17:34:33.847891] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:33.160 [2024-11-26 17:34:33.847940] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:33.160 [2024-11-26 17:34:33.851033] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:33.160 [2024-11-26 17:34:33.851088] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:33.160 [2024-11-26 17:34:33.851199] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:33.160 [2024-11-26 17:34:33.851221] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:37:33.160 { 00:37:33.160 "results": [ 00:37:33.160 { 00:37:33.160 "job": "raid_bdev1", 00:37:33.160 "core_mask": "0x1", 00:37:33.160 "workload": "randrw", 00:37:33.160 "percentage": 50, 00:37:33.160 "status": "finished", 00:37:33.160 "queue_depth": 1, 00:37:33.160 "io_size": 131072, 00:37:33.160 "runtime": 1.392918, 00:37:33.160 "iops": 11144.231031546724, 00:37:33.160 "mibps": 1393.0288789433405, 00:37:33.160 "io_failed": 0, 00:37:33.160 "io_timeout": 0, 00:37:33.160 "avg_latency_us": 86.91373943777468, 00:37:33.160 "min_latency_us": 24.034934497816593, 00:37:33.160 "max_latency_us": 1438.071615720524 00:37:33.160 } 00:37:33.160 ], 00:37:33.160 "core_count": 1 
00:37:33.160 } 00:37:33.160 17:34:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:33.160 17:34:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75452 00:37:33.419 17:34:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75452 ']' 00:37:33.419 17:34:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75452 00:37:33.419 17:34:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:37:33.419 17:34:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:33.419 17:34:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75452 00:37:33.419 17:34:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:33.419 17:34:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:33.419 killing process with pid 75452 00:37:33.419 17:34:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75452' 00:37:33.419 17:34:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75452 00:37:33.419 [2024-11-26 17:34:33.899492] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:33.419 17:34:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75452 00:37:33.679 [2024-11-26 17:34:34.253750] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:35.060 17:34:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.XQWaInO0EJ 00:37:35.060 17:34:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:37:35.060 17:34:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:37:35.060 17:34:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:37:35.060 17:34:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:37:35.060 17:34:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:37:35.060 17:34:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:37:35.060 17:34:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:37:35.060 00:37:35.060 real 0m4.935s 00:37:35.060 user 0m5.900s 00:37:35.060 sys 0m0.597s 00:37:35.060 17:34:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:35.060 17:34:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:37:35.060 ************************************ 00:37:35.060 END TEST raid_write_error_test 00:37:35.060 ************************************ 00:37:35.060 17:34:35 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:37:35.060 17:34:35 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:37:35.060 17:34:35 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:37:35.060 17:34:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:37:35.060 17:34:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:35.060 17:34:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:35.060 ************************************ 00:37:35.060 START TEST raid_rebuild_test 00:37:35.060 ************************************ 00:37:35.060 17:34:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:37:35.060 17:34:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:37:35.060 17:34:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:37:35.060 17:34:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:37:35.060 
17:34:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:37:35.060 17:34:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:37:35.060 17:34:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:37:35.060 17:34:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:37:35.060 17:34:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:37:35.060 17:34:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:37:35.060 17:34:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:37:35.060 17:34:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:37:35.060 17:34:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:37:35.060 17:34:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:37:35.060 17:34:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:37:35.060 17:34:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:37:35.060 17:34:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:37:35.060 17:34:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:37:35.060 17:34:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:37:35.060 17:34:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:37:35.060 17:34:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:37:35.060 17:34:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:37:35.060 17:34:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:37:35.060 17:34:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:37:35.060 17:34:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75590 00:37:35.060 17:34:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:37:35.060 17:34:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75590 00:37:35.060 17:34:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75590 ']' 00:37:35.060 17:34:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:35.060 17:34:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:35.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:35.060 17:34:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:35.060 17:34:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:35.060 17:34:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:35.060 I/O size of 3145728 is greater than zero copy threshold (65536). 00:37:35.060 Zero copy mechanism will not be used. 00:37:35.060 [2024-11-26 17:34:35.671078] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:37:35.060 [2024-11-26 17:34:35.671204] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75590 ] 00:37:35.319 [2024-11-26 17:34:35.839661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:35.319 [2024-11-26 17:34:35.972015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:35.578 [2024-11-26 17:34:36.191751] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:35.578 [2024-11-26 17:34:36.191830] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:36.146 17:34:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:36.146 17:34:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:37:36.146 17:34:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:37:36.146 17:34:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:37:36.146 17:34:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.146 17:34:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:36.146 BaseBdev1_malloc 00:37:36.146 17:34:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.146 17:34:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:37:36.146 17:34:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.146 17:34:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:36.146 [2024-11-26 17:34:36.630199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:37:36.146 
[2024-11-26 17:34:36.630285] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:36.146 [2024-11-26 17:34:36.630311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:37:36.146 [2024-11-26 17:34:36.630328] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:36.146 [2024-11-26 17:34:36.632896] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:36.146 [2024-11-26 17:34:36.632954] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:37:36.146 BaseBdev1 00:37:36.146 17:34:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.146 17:34:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:37:36.146 17:34:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:37:36.146 17:34:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.146 17:34:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:36.146 BaseBdev2_malloc 00:37:36.146 17:34:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.146 17:34:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:37:36.146 17:34:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.146 17:34:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:36.146 [2024-11-26 17:34:36.693896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:37:36.146 [2024-11-26 17:34:36.694035] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:36.146 [2024-11-26 17:34:36.694073] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:37:36.146 [2024-11-26 17:34:36.694089] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:36.146 [2024-11-26 17:34:36.696650] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:36.146 [2024-11-26 17:34:36.696700] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:37:36.146 BaseBdev2 00:37:36.146 17:34:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.146 17:34:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:37:36.146 17:34:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.146 17:34:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:36.146 spare_malloc 00:37:36.146 17:34:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.146 17:34:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:37:36.146 17:34:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.147 17:34:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:36.147 spare_delay 00:37:36.147 17:34:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.147 17:34:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:37:36.147 17:34:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.147 17:34:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:36.147 [2024-11-26 17:34:36.778683] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:37:36.147 [2024-11-26 17:34:36.778812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:37:36.147 [2024-11-26 17:34:36.778843] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:37:36.147 [2024-11-26 17:34:36.778857] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:36.147 [2024-11-26 17:34:36.781045] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:36.147 [2024-11-26 17:34:36.781093] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:37:36.147 spare 00:37:36.147 17:34:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.147 17:34:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:37:36.147 17:34:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.147 17:34:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:36.147 [2024-11-26 17:34:36.790768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:36.147 [2024-11-26 17:34:36.792646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:36.147 [2024-11-26 17:34:36.792759] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:37:36.147 [2024-11-26 17:34:36.792774] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:37:36.147 [2024-11-26 17:34:36.793094] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:37:36.147 [2024-11-26 17:34:36.793307] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:37:36.147 [2024-11-26 17:34:36.793321] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:37:36.147 [2024-11-26 17:34:36.793563] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:37:36.147 17:34:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.147 17:34:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:36.147 17:34:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:36.147 17:34:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:36.147 17:34:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:36.147 17:34:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:36.147 17:34:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:37:36.147 17:34:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:36.147 17:34:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:36.147 17:34:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:36.147 17:34:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:36.147 17:34:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:36.147 17:34:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.147 17:34:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:36.147 17:34:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:36.147 17:34:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.407 17:34:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:36.407 "name": "raid_bdev1", 00:37:36.407 "uuid": "62bee0b0-264e-48e3-a304-679fe6046fd8", 00:37:36.407 "strip_size_kb": 0, 00:37:36.407 "state": "online", 00:37:36.407 
"raid_level": "raid1", 00:37:36.407 "superblock": false, 00:37:36.407 "num_base_bdevs": 2, 00:37:36.407 "num_base_bdevs_discovered": 2, 00:37:36.407 "num_base_bdevs_operational": 2, 00:37:36.407 "base_bdevs_list": [ 00:37:36.407 { 00:37:36.407 "name": "BaseBdev1", 00:37:36.407 "uuid": "4f90f063-4638-5684-bde0-90f4739765b3", 00:37:36.407 "is_configured": true, 00:37:36.407 "data_offset": 0, 00:37:36.407 "data_size": 65536 00:37:36.407 }, 00:37:36.407 { 00:37:36.407 "name": "BaseBdev2", 00:37:36.407 "uuid": "7ed07c24-c5ad-52df-8db6-d5e494fbf68b", 00:37:36.407 "is_configured": true, 00:37:36.407 "data_offset": 0, 00:37:36.407 "data_size": 65536 00:37:36.407 } 00:37:36.407 ] 00:37:36.407 }' 00:37:36.407 17:34:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:36.407 17:34:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:36.667 17:34:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:37:36.667 17:34:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.667 17:34:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:36.667 17:34:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:37:36.667 [2024-11-26 17:34:37.210341] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:36.667 17:34:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.667 17:34:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:37:36.667 17:34:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:36.667 17:34:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:37:36.667 17:34:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.667 17:34:37 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:36.667 17:34:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.667 17:34:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:37:36.667 17:34:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:37:36.667 17:34:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:37:36.667 17:34:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:37:36.667 17:34:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:37:36.667 17:34:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:37:36.667 17:34:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:37:36.667 17:34:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:37:36.667 17:34:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:37:36.667 17:34:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:37:36.667 17:34:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:37:36.667 17:34:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:37:36.667 17:34:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:37:36.667 17:34:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:37:36.927 [2024-11-26 17:34:37.489706] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:37:36.927 /dev/nbd0 00:37:36.927 17:34:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:37:36.927 17:34:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd0 00:37:36.927 17:34:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:37:36.927 17:34:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:37:36.927 17:34:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:37:36.927 17:34:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:37:36.927 17:34:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:37:36.927 17:34:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:37:36.927 17:34:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:37:36.927 17:34:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:37:36.927 17:34:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:36.927 1+0 records in 00:37:36.927 1+0 records out 00:37:36.927 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000785245 s, 5.2 MB/s 00:37:36.927 17:34:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:36.927 17:34:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:37:36.927 17:34:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:36.927 17:34:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:37:36.927 17:34:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:37:36.927 17:34:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:36.927 17:34:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:37:36.927 17:34:37 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:37:36.927 17:34:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:37:36.927 17:34:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:37:42.222 65536+0 records in 00:37:42.222 65536+0 records out 00:37:42.222 33554432 bytes (34 MB, 32 MiB) copied, 4.8245 s, 7.0 MB/s 00:37:42.222 17:34:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:37:42.222 17:34:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:37:42.222 17:34:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:37:42.222 17:34:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:37:42.222 17:34:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:37:42.222 17:34:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:42.222 17:34:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:37:42.222 [2024-11-26 17:34:42.627266] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:42.222 17:34:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:37:42.222 17:34:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:37:42.222 17:34:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:37:42.222 17:34:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:42.222 17:34:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:42.222 17:34:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:42.222 17:34:42 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:37:42.222 17:34:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:37:42.222 17:34:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:37:42.222 17:34:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.222 17:34:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:42.222 [2024-11-26 17:34:42.663304] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:37:42.222 17:34:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:42.222 17:34:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:42.222 17:34:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:42.222 17:34:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:42.222 17:34:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:42.222 17:34:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:42.222 17:34:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:37:42.222 17:34:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:42.222 17:34:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:42.222 17:34:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:42.222 17:34:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:42.222 17:34:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:42.222 17:34:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:42.222 17:34:42 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.222 17:34:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:42.222 17:34:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:42.222 17:34:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:42.222 "name": "raid_bdev1", 00:37:42.222 "uuid": "62bee0b0-264e-48e3-a304-679fe6046fd8", 00:37:42.222 "strip_size_kb": 0, 00:37:42.222 "state": "online", 00:37:42.222 "raid_level": "raid1", 00:37:42.222 "superblock": false, 00:37:42.222 "num_base_bdevs": 2, 00:37:42.222 "num_base_bdevs_discovered": 1, 00:37:42.222 "num_base_bdevs_operational": 1, 00:37:42.222 "base_bdevs_list": [ 00:37:42.222 { 00:37:42.222 "name": null, 00:37:42.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:42.222 "is_configured": false, 00:37:42.222 "data_offset": 0, 00:37:42.222 "data_size": 65536 00:37:42.222 }, 00:37:42.222 { 00:37:42.222 "name": "BaseBdev2", 00:37:42.222 "uuid": "7ed07c24-c5ad-52df-8db6-d5e494fbf68b", 00:37:42.222 "is_configured": true, 00:37:42.222 "data_offset": 0, 00:37:42.222 "data_size": 65536 00:37:42.222 } 00:37:42.222 ] 00:37:42.222 }' 00:37:42.222 17:34:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:42.222 17:34:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:42.482 17:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:37:42.482 17:34:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.482 17:34:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:42.482 [2024-11-26 17:34:43.170517] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:42.740 [2024-11-26 17:34:43.190995] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
00:37:42.740 17:34:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:42.740 17:34:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:37:42.740 [2024-11-26 17:34:43.193269] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:43.679 17:34:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:43.679 17:34:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:43.679 17:34:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:43.679 17:34:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:43.679 17:34:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:43.679 17:34:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:43.679 17:34:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:43.679 17:34:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:43.679 17:34:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:43.679 17:34:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:43.679 17:34:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:43.679 "name": "raid_bdev1", 00:37:43.679 "uuid": "62bee0b0-264e-48e3-a304-679fe6046fd8", 00:37:43.679 "strip_size_kb": 0, 00:37:43.679 "state": "online", 00:37:43.679 "raid_level": "raid1", 00:37:43.679 "superblock": false, 00:37:43.679 "num_base_bdevs": 2, 00:37:43.679 "num_base_bdevs_discovered": 2, 00:37:43.679 "num_base_bdevs_operational": 2, 00:37:43.679 "process": { 00:37:43.679 "type": "rebuild", 00:37:43.679 "target": "spare", 00:37:43.679 "progress": { 00:37:43.679 
"blocks": 20480, 00:37:43.679 "percent": 31 00:37:43.679 } 00:37:43.679 }, 00:37:43.679 "base_bdevs_list": [ 00:37:43.679 { 00:37:43.679 "name": "spare", 00:37:43.679 "uuid": "b16bfa61-2ab2-549a-82ce-e1e7e16d9bd7", 00:37:43.679 "is_configured": true, 00:37:43.679 "data_offset": 0, 00:37:43.679 "data_size": 65536 00:37:43.679 }, 00:37:43.679 { 00:37:43.679 "name": "BaseBdev2", 00:37:43.679 "uuid": "7ed07c24-c5ad-52df-8db6-d5e494fbf68b", 00:37:43.679 "is_configured": true, 00:37:43.679 "data_offset": 0, 00:37:43.679 "data_size": 65536 00:37:43.679 } 00:37:43.679 ] 00:37:43.679 }' 00:37:43.679 17:34:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:43.679 17:34:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:43.679 17:34:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:43.679 17:34:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:43.679 17:34:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:37:43.679 17:34:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:43.679 17:34:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:43.679 [2024-11-26 17:34:44.356029] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:43.938 [2024-11-26 17:34:44.399796] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:37:43.938 [2024-11-26 17:34:44.399981] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:43.938 [2024-11-26 17:34:44.400042] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:43.938 [2024-11-26 17:34:44.400089] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:37:43.938 17:34:44 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:43.938 17:34:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:43.938 17:34:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:43.938 17:34:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:43.938 17:34:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:43.938 17:34:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:43.938 17:34:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:37:43.938 17:34:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:43.938 17:34:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:43.938 17:34:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:43.938 17:34:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:43.938 17:34:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:43.938 17:34:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:43.938 17:34:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:43.938 17:34:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:43.938 17:34:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:43.938 17:34:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:43.938 "name": "raid_bdev1", 00:37:43.938 "uuid": "62bee0b0-264e-48e3-a304-679fe6046fd8", 00:37:43.938 "strip_size_kb": 0, 00:37:43.938 "state": "online", 00:37:43.938 "raid_level": "raid1", 00:37:43.938 
"superblock": false, 00:37:43.938 "num_base_bdevs": 2, 00:37:43.938 "num_base_bdevs_discovered": 1, 00:37:43.938 "num_base_bdevs_operational": 1, 00:37:43.938 "base_bdevs_list": [ 00:37:43.938 { 00:37:43.938 "name": null, 00:37:43.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:43.938 "is_configured": false, 00:37:43.938 "data_offset": 0, 00:37:43.938 "data_size": 65536 00:37:43.938 }, 00:37:43.938 { 00:37:43.938 "name": "BaseBdev2", 00:37:43.938 "uuid": "7ed07c24-c5ad-52df-8db6-d5e494fbf68b", 00:37:43.938 "is_configured": true, 00:37:43.938 "data_offset": 0, 00:37:43.938 "data_size": 65536 00:37:43.938 } 00:37:43.938 ] 00:37:43.938 }' 00:37:43.938 17:34:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:43.938 17:34:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:44.197 17:34:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:44.197 17:34:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:44.197 17:34:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:37:44.197 17:34:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:37:44.197 17:34:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:44.197 17:34:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:44.197 17:34:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:44.197 17:34:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:44.197 17:34:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:44.456 17:34:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:44.456 17:34:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:37:44.456 "name": "raid_bdev1", 00:37:44.456 "uuid": "62bee0b0-264e-48e3-a304-679fe6046fd8", 00:37:44.456 "strip_size_kb": 0, 00:37:44.456 "state": "online", 00:37:44.456 "raid_level": "raid1", 00:37:44.456 "superblock": false, 00:37:44.456 "num_base_bdevs": 2, 00:37:44.456 "num_base_bdevs_discovered": 1, 00:37:44.456 "num_base_bdevs_operational": 1, 00:37:44.456 "base_bdevs_list": [ 00:37:44.456 { 00:37:44.456 "name": null, 00:37:44.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:44.456 "is_configured": false, 00:37:44.456 "data_offset": 0, 00:37:44.456 "data_size": 65536 00:37:44.456 }, 00:37:44.456 { 00:37:44.456 "name": "BaseBdev2", 00:37:44.456 "uuid": "7ed07c24-c5ad-52df-8db6-d5e494fbf68b", 00:37:44.456 "is_configured": true, 00:37:44.456 "data_offset": 0, 00:37:44.456 "data_size": 65536 00:37:44.456 } 00:37:44.456 ] 00:37:44.456 }' 00:37:44.456 17:34:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:44.456 17:34:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:37:44.456 17:34:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:44.456 17:34:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:37:44.456 17:34:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:37:44.456 17:34:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:44.456 17:34:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:44.456 [2024-11-26 17:34:45.031025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:44.456 [2024-11-26 17:34:45.047050] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:37:44.456 17:34:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:44.456 
17:34:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:37:44.456 [2024-11-26 17:34:45.048936] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:45.391 17:34:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:45.391 17:34:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:45.391 17:34:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:45.391 17:34:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:45.391 17:34:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:45.391 17:34:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:45.391 17:34:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:45.391 17:34:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:45.391 17:34:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:45.391 17:34:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:45.651 17:34:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:45.651 "name": "raid_bdev1", 00:37:45.651 "uuid": "62bee0b0-264e-48e3-a304-679fe6046fd8", 00:37:45.651 "strip_size_kb": 0, 00:37:45.651 "state": "online", 00:37:45.651 "raid_level": "raid1", 00:37:45.651 "superblock": false, 00:37:45.651 "num_base_bdevs": 2, 00:37:45.651 "num_base_bdevs_discovered": 2, 00:37:45.651 "num_base_bdevs_operational": 2, 00:37:45.651 "process": { 00:37:45.651 "type": "rebuild", 00:37:45.651 "target": "spare", 00:37:45.651 "progress": { 00:37:45.651 "blocks": 20480, 00:37:45.651 "percent": 31 00:37:45.651 } 00:37:45.651 }, 00:37:45.651 "base_bdevs_list": [ 
00:37:45.651 { 00:37:45.651 "name": "spare", 00:37:45.651 "uuid": "b16bfa61-2ab2-549a-82ce-e1e7e16d9bd7", 00:37:45.651 "is_configured": true, 00:37:45.651 "data_offset": 0, 00:37:45.651 "data_size": 65536 00:37:45.651 }, 00:37:45.651 { 00:37:45.651 "name": "BaseBdev2", 00:37:45.651 "uuid": "7ed07c24-c5ad-52df-8db6-d5e494fbf68b", 00:37:45.651 "is_configured": true, 00:37:45.651 "data_offset": 0, 00:37:45.651 "data_size": 65536 00:37:45.651 } 00:37:45.651 ] 00:37:45.651 }' 00:37:45.651 17:34:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:45.651 17:34:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:45.651 17:34:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:45.651 17:34:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:45.651 17:34:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:37:45.651 17:34:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:37:45.651 17:34:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:37:45.651 17:34:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:37:45.651 17:34:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=381 00:37:45.651 17:34:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:37:45.651 17:34:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:45.651 17:34:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:45.651 17:34:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:45.651 17:34:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:45.651 
17:34:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:45.651 17:34:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:45.651 17:34:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:45.651 17:34:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:45.651 17:34:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:45.651 17:34:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:45.651 17:34:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:45.651 "name": "raid_bdev1", 00:37:45.651 "uuid": "62bee0b0-264e-48e3-a304-679fe6046fd8", 00:37:45.651 "strip_size_kb": 0, 00:37:45.651 "state": "online", 00:37:45.651 "raid_level": "raid1", 00:37:45.651 "superblock": false, 00:37:45.651 "num_base_bdevs": 2, 00:37:45.651 "num_base_bdevs_discovered": 2, 00:37:45.651 "num_base_bdevs_operational": 2, 00:37:45.651 "process": { 00:37:45.651 "type": "rebuild", 00:37:45.651 "target": "spare", 00:37:45.651 "progress": { 00:37:45.651 "blocks": 22528, 00:37:45.651 "percent": 34 00:37:45.651 } 00:37:45.651 }, 00:37:45.651 "base_bdevs_list": [ 00:37:45.651 { 00:37:45.651 "name": "spare", 00:37:45.651 "uuid": "b16bfa61-2ab2-549a-82ce-e1e7e16d9bd7", 00:37:45.651 "is_configured": true, 00:37:45.651 "data_offset": 0, 00:37:45.651 "data_size": 65536 00:37:45.651 }, 00:37:45.651 { 00:37:45.651 "name": "BaseBdev2", 00:37:45.651 "uuid": "7ed07c24-c5ad-52df-8db6-d5e494fbf68b", 00:37:45.651 "is_configured": true, 00:37:45.651 "data_offset": 0, 00:37:45.651 "data_size": 65536 00:37:45.651 } 00:37:45.651 ] 00:37:45.651 }' 00:37:45.651 17:34:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:45.651 17:34:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:37:45.651 17:34:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:45.651 17:34:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:45.651 17:34:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:37:47.026 17:34:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:37:47.026 17:34:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:47.026 17:34:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:47.026 17:34:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:47.026 17:34:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:47.026 17:34:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:47.026 17:34:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:47.026 17:34:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.026 17:34:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:47.026 17:34:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:47.026 17:34:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.027 17:34:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:47.027 "name": "raid_bdev1", 00:37:47.027 "uuid": "62bee0b0-264e-48e3-a304-679fe6046fd8", 00:37:47.027 "strip_size_kb": 0, 00:37:47.027 "state": "online", 00:37:47.027 "raid_level": "raid1", 00:37:47.027 "superblock": false, 00:37:47.027 "num_base_bdevs": 2, 00:37:47.027 "num_base_bdevs_discovered": 2, 00:37:47.027 "num_base_bdevs_operational": 2, 00:37:47.027 "process": { 
00:37:47.027 "type": "rebuild", 00:37:47.027 "target": "spare", 00:37:47.027 "progress": { 00:37:47.027 "blocks": 45056, 00:37:47.027 "percent": 68 00:37:47.027 } 00:37:47.027 }, 00:37:47.027 "base_bdevs_list": [ 00:37:47.027 { 00:37:47.027 "name": "spare", 00:37:47.027 "uuid": "b16bfa61-2ab2-549a-82ce-e1e7e16d9bd7", 00:37:47.027 "is_configured": true, 00:37:47.027 "data_offset": 0, 00:37:47.027 "data_size": 65536 00:37:47.027 }, 00:37:47.027 { 00:37:47.027 "name": "BaseBdev2", 00:37:47.027 "uuid": "7ed07c24-c5ad-52df-8db6-d5e494fbf68b", 00:37:47.027 "is_configured": true, 00:37:47.027 "data_offset": 0, 00:37:47.027 "data_size": 65536 00:37:47.027 } 00:37:47.027 ] 00:37:47.027 }' 00:37:47.027 17:34:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:47.027 17:34:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:47.027 17:34:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:47.027 17:34:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:47.027 17:34:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:37:47.670 [2024-11-26 17:34:48.264599] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:37:47.670 [2024-11-26 17:34:48.264754] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:37:47.670 [2024-11-26 17:34:48.264814] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:47.928 17:34:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:37:47.928 17:34:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:47.928 17:34:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:47.928 17:34:48 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:47.928 17:34:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:47.928 17:34:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:47.928 17:34:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:47.928 17:34:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:47.928 17:34:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.928 17:34:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:47.928 17:34:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:47.928 17:34:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:47.928 "name": "raid_bdev1", 00:37:47.928 "uuid": "62bee0b0-264e-48e3-a304-679fe6046fd8", 00:37:47.928 "strip_size_kb": 0, 00:37:47.928 "state": "online", 00:37:47.928 "raid_level": "raid1", 00:37:47.928 "superblock": false, 00:37:47.928 "num_base_bdevs": 2, 00:37:47.928 "num_base_bdevs_discovered": 2, 00:37:47.928 "num_base_bdevs_operational": 2, 00:37:47.928 "base_bdevs_list": [ 00:37:47.928 { 00:37:47.928 "name": "spare", 00:37:47.928 "uuid": "b16bfa61-2ab2-549a-82ce-e1e7e16d9bd7", 00:37:47.928 "is_configured": true, 00:37:47.928 "data_offset": 0, 00:37:47.928 "data_size": 65536 00:37:47.928 }, 00:37:47.928 { 00:37:47.928 "name": "BaseBdev2", 00:37:47.928 "uuid": "7ed07c24-c5ad-52df-8db6-d5e494fbf68b", 00:37:47.928 "is_configured": true, 00:37:47.928 "data_offset": 0, 00:37:47.928 "data_size": 65536 00:37:47.928 } 00:37:47.928 ] 00:37:47.928 }' 00:37:47.928 17:34:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:47.928 17:34:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:37:47.928 17:34:48 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:47.928 17:34:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:37:47.928 17:34:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:37:47.928 17:34:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:47.928 17:34:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:47.928 17:34:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:37:47.928 17:34:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:37:47.928 17:34:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:47.928 17:34:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:47.928 17:34:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:47.928 17:34:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.928 17:34:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:48.187 17:34:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:48.187 17:34:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:48.187 "name": "raid_bdev1", 00:37:48.187 "uuid": "62bee0b0-264e-48e3-a304-679fe6046fd8", 00:37:48.187 "strip_size_kb": 0, 00:37:48.187 "state": "online", 00:37:48.187 "raid_level": "raid1", 00:37:48.187 "superblock": false, 00:37:48.187 "num_base_bdevs": 2, 00:37:48.187 "num_base_bdevs_discovered": 2, 00:37:48.187 "num_base_bdevs_operational": 2, 00:37:48.187 "base_bdevs_list": [ 00:37:48.187 { 00:37:48.187 "name": "spare", 00:37:48.187 "uuid": "b16bfa61-2ab2-549a-82ce-e1e7e16d9bd7", 00:37:48.187 "is_configured": true, 
00:37:48.187 "data_offset": 0, 00:37:48.187 "data_size": 65536 00:37:48.187 }, 00:37:48.187 { 00:37:48.187 "name": "BaseBdev2", 00:37:48.187 "uuid": "7ed07c24-c5ad-52df-8db6-d5e494fbf68b", 00:37:48.187 "is_configured": true, 00:37:48.187 "data_offset": 0, 00:37:48.187 "data_size": 65536 00:37:48.187 } 00:37:48.187 ] 00:37:48.187 }' 00:37:48.187 17:34:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:48.187 17:34:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:37:48.187 17:34:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:48.187 17:34:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:37:48.187 17:34:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:48.187 17:34:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:48.187 17:34:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:48.187 17:34:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:48.187 17:34:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:48.187 17:34:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:37:48.187 17:34:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:48.187 17:34:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:48.187 17:34:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:48.187 17:34:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:48.187 17:34:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:48.187 17:34:48 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:37:48.187 17:34:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:48.187 17:34:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:48.187 17:34:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:48.187 17:34:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:48.187 "name": "raid_bdev1", 00:37:48.187 "uuid": "62bee0b0-264e-48e3-a304-679fe6046fd8", 00:37:48.187 "strip_size_kb": 0, 00:37:48.187 "state": "online", 00:37:48.187 "raid_level": "raid1", 00:37:48.187 "superblock": false, 00:37:48.187 "num_base_bdevs": 2, 00:37:48.187 "num_base_bdevs_discovered": 2, 00:37:48.187 "num_base_bdevs_operational": 2, 00:37:48.187 "base_bdevs_list": [ 00:37:48.187 { 00:37:48.187 "name": "spare", 00:37:48.187 "uuid": "b16bfa61-2ab2-549a-82ce-e1e7e16d9bd7", 00:37:48.187 "is_configured": true, 00:37:48.187 "data_offset": 0, 00:37:48.187 "data_size": 65536 00:37:48.187 }, 00:37:48.187 { 00:37:48.187 "name": "BaseBdev2", 00:37:48.187 "uuid": "7ed07c24-c5ad-52df-8db6-d5e494fbf68b", 00:37:48.187 "is_configured": true, 00:37:48.187 "data_offset": 0, 00:37:48.187 "data_size": 65536 00:37:48.187 } 00:37:48.187 ] 00:37:48.187 }' 00:37:48.187 17:34:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:48.187 17:34:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:48.754 17:34:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:37:48.754 17:34:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:48.754 17:34:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:48.754 [2024-11-26 17:34:49.227415] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:48.754 [2024-11-26 17:34:49.227500] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:48.754 [2024-11-26 17:34:49.227638] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:48.754 [2024-11-26 17:34:49.227754] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:48.754 [2024-11-26 17:34:49.227797] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:37:48.754 17:34:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:48.755 17:34:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:37:48.755 17:34:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:48.755 17:34:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:48.755 17:34:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:48.755 17:34:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:48.755 17:34:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:37:48.755 17:34:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:37:48.755 17:34:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:37:48.755 17:34:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:37:48.755 17:34:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:37:48.755 17:34:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:37:48.755 17:34:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:37:48.755 17:34:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:37:48.755 17:34:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:37:48.755 17:34:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:37:48.755 17:34:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:37:48.755 17:34:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:48.755 17:34:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:37:49.013 /dev/nbd0 00:37:49.014 17:34:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:37:49.014 17:34:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:37:49.014 17:34:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:37:49.014 17:34:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:37:49.014 17:34:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:37:49.014 17:34:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:37:49.014 17:34:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:37:49.014 17:34:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:37:49.014 17:34:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:37:49.014 17:34:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:37:49.014 17:34:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:49.014 1+0 records in 00:37:49.014 1+0 records out 00:37:49.014 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268363 s, 15.3 MB/s 00:37:49.014 17:34:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:49.014 17:34:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:37:49.014 17:34:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:49.014 17:34:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:37:49.014 17:34:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:37:49.014 17:34:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:49.014 17:34:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:49.014 17:34:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:37:49.272 /dev/nbd1 00:37:49.272 17:34:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:37:49.272 17:34:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:37:49.272 17:34:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:37:49.272 17:34:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:37:49.272 17:34:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:37:49.272 17:34:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:37:49.272 17:34:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:37:49.272 17:34:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:37:49.272 17:34:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:37:49.272 17:34:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:37:49.272 17:34:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:49.272 1+0 records in 00:37:49.272 1+0 records out 00:37:49.272 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000494953 s, 8.3 MB/s 00:37:49.272 17:34:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:49.272 17:34:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:37:49.272 17:34:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:49.272 17:34:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:37:49.272 17:34:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:37:49.272 17:34:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:49.272 17:34:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:49.272 17:34:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:37:49.529 17:34:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:37:49.529 17:34:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:37:49.529 17:34:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:37:49.529 17:34:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:37:49.529 17:34:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:37:49.529 17:34:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:49.529 17:34:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:37:49.787 17:34:50 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:37:49.787 17:34:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:37:49.787 17:34:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:37:49.787 17:34:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:49.787 17:34:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:49.787 17:34:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:49.787 17:34:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:37:49.787 17:34:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:37:49.787 17:34:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:49.787 17:34:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:37:50.046 17:34:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:37:50.046 17:34:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:37:50.046 17:34:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:37:50.046 17:34:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:50.046 17:34:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:50.046 17:34:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:37:50.046 17:34:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:37:50.046 17:34:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:37:50.046 17:34:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:37:50.046 17:34:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75590 00:37:50.046 17:34:50 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75590 ']' 00:37:50.046 17:34:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75590 00:37:50.046 17:34:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:37:50.046 17:34:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:50.046 17:34:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75590 00:37:50.046 17:34:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:50.046 17:34:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:50.046 17:34:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75590' 00:37:50.046 killing process with pid 75590 00:37:50.046 Received shutdown signal, test time was about 60.000000 seconds 00:37:50.046 00:37:50.046 Latency(us) 00:37:50.046 [2024-11-26T17:34:50.741Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:50.046 [2024-11-26T17:34:50.741Z] =================================================================================================================== 00:37:50.046 [2024-11-26T17:34:50.741Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:50.046 17:34:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75590 00:37:50.046 [2024-11-26 17:34:50.674454] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:50.046 17:34:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75590 00:37:50.615 [2024-11-26 17:34:51.040709] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:51.995 17:34:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:37:51.995 00:37:51.995 real 0m16.710s 00:37:51.995 user 0m18.624s 00:37:51.995 sys 0m3.299s 00:37:51.995 17:34:52 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:51.995 17:34:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:51.995 ************************************ 00:37:51.995 END TEST raid_rebuild_test 00:37:51.995 ************************************ 00:37:51.995 17:34:52 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:37:51.995 17:34:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:37:51.995 17:34:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:51.995 17:34:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:51.995 ************************************ 00:37:51.995 START TEST raid_rebuild_test_sb 00:37:51.995 ************************************ 00:37:51.995 17:34:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:37:51.995 17:34:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:37:51.995 17:34:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:37:51.995 17:34:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:37:51.995 17:34:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:37:51.995 17:34:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:37:51.995 17:34:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:37:51.995 17:34:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:37:51.995 17:34:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:37:51.995 17:34:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:37:51.995 17:34:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:37:51.995 17:34:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:37:51.995 17:34:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:37:51.995 17:34:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:37:51.995 17:34:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:37:51.995 17:34:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:37:51.995 17:34:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:37:51.995 17:34:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:37:51.995 17:34:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:37:51.995 17:34:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:37:51.995 17:34:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:37:51.995 17:34:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:37:51.995 17:34:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:37:51.995 17:34:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:37:51.995 17:34:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:37:51.995 17:34:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=76025 00:37:51.995 17:34:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:37:51.995 17:34:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 76025 00:37:51.995 17:34:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 76025 ']' 00:37:51.995 17:34:52 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:51.995 17:34:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:51.995 17:34:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:51.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:51.995 17:34:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:51.995 17:34:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:51.995 [2024-11-26 17:34:52.453026] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:37:51.995 I/O size of 3145728 is greater than zero copy threshold (65536). 00:37:51.995 Zero copy mechanism will not be used. 00:37:51.995 [2024-11-26 17:34:52.453255] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76025 ] 00:37:51.995 [2024-11-26 17:34:52.614119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:52.254 [2024-11-26 17:34:52.733436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:52.254 [2024-11-26 17:34:52.939085] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:52.254 [2024-11-26 17:34:52.939149] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:52.823 17:34:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:52.823 17:34:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:37:52.823 17:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev 
in "${base_bdevs[@]}" 00:37:52.823 17:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:37:52.823 17:34:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:52.823 17:34:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:52.823 BaseBdev1_malloc 00:37:52.823 17:34:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:52.823 17:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:37:52.823 17:34:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:52.823 17:34:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:52.823 [2024-11-26 17:34:53.330777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:37:52.823 [2024-11-26 17:34:53.330843] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:52.823 [2024-11-26 17:34:53.330865] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:37:52.823 [2024-11-26 17:34:53.330876] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:52.823 [2024-11-26 17:34:53.333068] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:52.823 [2024-11-26 17:34:53.333197] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:37:52.823 BaseBdev1 00:37:52.824 17:34:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:52.824 17:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:37:52.824 17:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:37:52.824 17:34:53 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:52.824 17:34:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:52.824 BaseBdev2_malloc 00:37:52.824 17:34:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:52.824 17:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:37:52.824 17:34:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:52.824 17:34:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:52.824 [2024-11-26 17:34:53.388995] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:37:52.824 [2024-11-26 17:34:53.389072] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:52.824 [2024-11-26 17:34:53.389099] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:37:52.824 [2024-11-26 17:34:53.389117] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:52.824 [2024-11-26 17:34:53.391435] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:52.824 [2024-11-26 17:34:53.391478] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:37:52.824 BaseBdev2 00:37:52.824 17:34:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:52.824 17:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:37:52.824 17:34:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:52.824 17:34:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:52.824 spare_malloc 00:37:52.824 17:34:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:37:52.824 17:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:37:52.824 17:34:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:52.824 17:34:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:52.824 spare_delay 00:37:52.824 17:34:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:52.824 17:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:37:52.824 17:34:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:52.824 17:34:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:52.824 [2024-11-26 17:34:53.470053] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:37:52.824 [2024-11-26 17:34:53.470180] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:52.824 [2024-11-26 17:34:53.470208] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:37:52.824 [2024-11-26 17:34:53.470219] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:52.824 [2024-11-26 17:34:53.472611] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:52.824 [2024-11-26 17:34:53.472658] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:37:52.824 spare 00:37:52.824 17:34:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:52.824 17:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:37:52.824 17:34:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:52.824 17:34:53 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:52.824 [2024-11-26 17:34:53.482085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:52.824 [2024-11-26 17:34:53.484043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:52.824 [2024-11-26 17:34:53.484237] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:37:52.824 [2024-11-26 17:34:53.484255] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:37:52.824 [2024-11-26 17:34:53.484571] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:37:52.824 [2024-11-26 17:34:53.484753] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:37:52.824 [2024-11-26 17:34:53.484764] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:37:52.824 [2024-11-26 17:34:53.484946] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:52.824 17:34:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:52.824 17:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:52.824 17:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:52.824 17:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:52.824 17:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:52.824 17:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:52.824 17:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:37:52.824 17:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:37:52.824 17:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:52.824 17:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:52.824 17:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:52.824 17:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:52.824 17:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:52.824 17:34:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:52.824 17:34:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:52.824 17:34:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:53.084 17:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:53.084 "name": "raid_bdev1", 00:37:53.084 "uuid": "64720f27-ec9a-45a8-ab28-700595bcb6d7", 00:37:53.084 "strip_size_kb": 0, 00:37:53.084 "state": "online", 00:37:53.084 "raid_level": "raid1", 00:37:53.084 "superblock": true, 00:37:53.084 "num_base_bdevs": 2, 00:37:53.084 "num_base_bdevs_discovered": 2, 00:37:53.084 "num_base_bdevs_operational": 2, 00:37:53.084 "base_bdevs_list": [ 00:37:53.084 { 00:37:53.084 "name": "BaseBdev1", 00:37:53.084 "uuid": "31c275d1-90e0-501f-8175-9b37ecb4acfc", 00:37:53.084 "is_configured": true, 00:37:53.084 "data_offset": 2048, 00:37:53.084 "data_size": 63488 00:37:53.084 }, 00:37:53.084 { 00:37:53.084 "name": "BaseBdev2", 00:37:53.084 "uuid": "0962900f-3884-5680-94fa-73dc405b7526", 00:37:53.084 "is_configured": true, 00:37:53.084 "data_offset": 2048, 00:37:53.084 "data_size": 63488 00:37:53.084 } 00:37:53.084 ] 00:37:53.084 }' 00:37:53.084 17:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:53.084 17:34:53 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:37:53.344 17:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:37:53.344 17:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:37:53.344 17:34:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:53.344 17:34:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:53.344 [2024-11-26 17:34:53.949632] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:53.344 17:34:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:53.344 17:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:37:53.344 17:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:53.344 17:34:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:53.344 17:34:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:53.344 17:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:37:53.344 17:34:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:53.344 17:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:37:53.344 17:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:37:53.344 17:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:37:53.344 17:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:37:53.344 17:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:37:53.344 17:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:37:53.344 17:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:37:53.344 17:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:37:53.344 17:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:37:53.344 17:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:37:53.344 17:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:37:53.344 17:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:37:53.344 17:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:37:53.344 17:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:37:53.603 [2024-11-26 17:34:54.224863] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:37:53.603 /dev/nbd0 00:37:53.603 17:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:37:53.603 17:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:37:53.603 17:34:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:37:53.603 17:34:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:37:53.603 17:34:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:37:53.603 17:34:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:37:53.603 17:34:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:37:53.603 17:34:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:37:53.603 17:34:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:37:53.603 17:34:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:37:53.603 17:34:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:53.603 1+0 records in 00:37:53.603 1+0 records out 00:37:53.603 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296166 s, 13.8 MB/s 00:37:53.603 17:34:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:53.603 17:34:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:37:53.603 17:34:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:53.603 17:34:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:37:53.603 17:34:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:37:53.603 17:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:53.603 17:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:37:53.603 17:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:37:53.603 17:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:37:53.603 17:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:37:57.791 63488+0 records in 00:37:57.791 63488+0 records out 00:37:57.791 32505856 bytes (33 MB, 31 MiB) copied, 4.08149 s, 8.0 MB/s 00:37:57.791 17:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:37:57.791 17:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:37:57.791 17:34:58 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:37:57.791 17:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:37:57.791 17:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:37:57.791 17:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:57.791 17:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:37:58.049 [2024-11-26 17:34:58.630904] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:58.049 17:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:37:58.049 17:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:37:58.049 17:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:37:58.049 17:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:58.049 17:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:58.049 17:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:58.049 17:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:37:58.049 17:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:37:58.049 17:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:37:58.049 17:34:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:58.049 17:34:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:58.049 [2024-11-26 17:34:58.670917] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:37:58.049 17:34:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:37:58.049 17:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:58.049 17:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:58.049 17:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:58.049 17:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:58.049 17:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:58.049 17:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:37:58.049 17:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:58.049 17:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:58.049 17:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:58.049 17:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:58.049 17:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:58.049 17:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:58.049 17:34:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:58.049 17:34:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:58.049 17:34:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:58.049 17:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:58.049 "name": "raid_bdev1", 00:37:58.049 "uuid": "64720f27-ec9a-45a8-ab28-700595bcb6d7", 00:37:58.049 "strip_size_kb": 0, 00:37:58.049 "state": "online", 00:37:58.049 "raid_level": "raid1", 00:37:58.049 "superblock": true, 
00:37:58.049 "num_base_bdevs": 2, 00:37:58.049 "num_base_bdevs_discovered": 1, 00:37:58.049 "num_base_bdevs_operational": 1, 00:37:58.049 "base_bdevs_list": [ 00:37:58.049 { 00:37:58.049 "name": null, 00:37:58.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:58.049 "is_configured": false, 00:37:58.049 "data_offset": 0, 00:37:58.049 "data_size": 63488 00:37:58.049 }, 00:37:58.049 { 00:37:58.049 "name": "BaseBdev2", 00:37:58.049 "uuid": "0962900f-3884-5680-94fa-73dc405b7526", 00:37:58.049 "is_configured": true, 00:37:58.050 "data_offset": 2048, 00:37:58.050 "data_size": 63488 00:37:58.050 } 00:37:58.050 ] 00:37:58.050 }' 00:37:58.050 17:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:58.050 17:34:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:58.616 17:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:37:58.616 17:34:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:58.616 17:34:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:58.616 [2024-11-26 17:34:59.110192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:58.616 [2024-11-26 17:34:59.127632] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:37:58.616 17:34:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:58.616 [2024-11-26 17:34:59.129543] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:58.616 17:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:37:59.548 17:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:59.548 17:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:37:59.548 17:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:59.548 17:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:59.548 17:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:59.548 17:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:59.548 17:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.548 17:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:59.548 17:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:59.548 17:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.548 17:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:59.548 "name": "raid_bdev1", 00:37:59.548 "uuid": "64720f27-ec9a-45a8-ab28-700595bcb6d7", 00:37:59.548 "strip_size_kb": 0, 00:37:59.548 "state": "online", 00:37:59.548 "raid_level": "raid1", 00:37:59.548 "superblock": true, 00:37:59.548 "num_base_bdevs": 2, 00:37:59.548 "num_base_bdevs_discovered": 2, 00:37:59.548 "num_base_bdevs_operational": 2, 00:37:59.548 "process": { 00:37:59.548 "type": "rebuild", 00:37:59.548 "target": "spare", 00:37:59.548 "progress": { 00:37:59.548 "blocks": 20480, 00:37:59.548 "percent": 32 00:37:59.548 } 00:37:59.548 }, 00:37:59.548 "base_bdevs_list": [ 00:37:59.548 { 00:37:59.548 "name": "spare", 00:37:59.548 "uuid": "25a43e0d-def4-5c41-8e50-410d1e16a919", 00:37:59.548 "is_configured": true, 00:37:59.548 "data_offset": 2048, 00:37:59.548 "data_size": 63488 00:37:59.548 }, 00:37:59.548 { 00:37:59.548 "name": "BaseBdev2", 00:37:59.548 "uuid": "0962900f-3884-5680-94fa-73dc405b7526", 00:37:59.548 "is_configured": true, 00:37:59.548 "data_offset": 2048, 00:37:59.548 "data_size": 63488 
00:37:59.548 } 00:37:59.548 ] 00:37:59.548 }' 00:37:59.548 17:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:59.548 17:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:59.548 17:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:59.806 17:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:59.806 17:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:37:59.806 17:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.806 17:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:59.806 [2024-11-26 17:35:00.292919] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:59.806 [2024-11-26 17:35:00.335270] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:37:59.806 [2024-11-26 17:35:00.335411] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:59.806 [2024-11-26 17:35:00.335474] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:59.806 [2024-11-26 17:35:00.335508] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:37:59.806 17:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.806 17:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:59.806 17:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:59.806 17:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:59.806 17:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 
-- # local raid_level=raid1 00:37:59.806 17:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:59.806 17:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:37:59.806 17:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:59.806 17:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:59.806 17:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:59.806 17:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:59.806 17:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:59.806 17:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.806 17:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:59.806 17:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:59.806 17:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.806 17:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:59.806 "name": "raid_bdev1", 00:37:59.806 "uuid": "64720f27-ec9a-45a8-ab28-700595bcb6d7", 00:37:59.806 "strip_size_kb": 0, 00:37:59.806 "state": "online", 00:37:59.806 "raid_level": "raid1", 00:37:59.806 "superblock": true, 00:37:59.806 "num_base_bdevs": 2, 00:37:59.806 "num_base_bdevs_discovered": 1, 00:37:59.806 "num_base_bdevs_operational": 1, 00:37:59.806 "base_bdevs_list": [ 00:37:59.806 { 00:37:59.806 "name": null, 00:37:59.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:59.806 "is_configured": false, 00:37:59.806 "data_offset": 0, 00:37:59.806 "data_size": 63488 00:37:59.806 }, 00:37:59.806 { 00:37:59.806 "name": "BaseBdev2", 00:37:59.806 "uuid": 
"0962900f-3884-5680-94fa-73dc405b7526", 00:37:59.806 "is_configured": true, 00:37:59.806 "data_offset": 2048, 00:37:59.806 "data_size": 63488 00:37:59.806 } 00:37:59.806 ] 00:37:59.806 }' 00:37:59.806 17:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:59.806 17:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:00.372 17:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:00.372 17:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:00.372 17:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:38:00.372 17:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:38:00.372 17:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:00.372 17:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:00.372 17:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:00.372 17:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.372 17:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:00.372 17:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.372 17:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:00.372 "name": "raid_bdev1", 00:38:00.372 "uuid": "64720f27-ec9a-45a8-ab28-700595bcb6d7", 00:38:00.372 "strip_size_kb": 0, 00:38:00.372 "state": "online", 00:38:00.372 "raid_level": "raid1", 00:38:00.372 "superblock": true, 00:38:00.372 "num_base_bdevs": 2, 00:38:00.372 "num_base_bdevs_discovered": 1, 00:38:00.372 "num_base_bdevs_operational": 1, 00:38:00.372 "base_bdevs_list": [ 00:38:00.372 { 
00:38:00.372 "name": null, 00:38:00.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:00.372 "is_configured": false, 00:38:00.372 "data_offset": 0, 00:38:00.372 "data_size": 63488 00:38:00.372 }, 00:38:00.372 { 00:38:00.372 "name": "BaseBdev2", 00:38:00.372 "uuid": "0962900f-3884-5680-94fa-73dc405b7526", 00:38:00.372 "is_configured": true, 00:38:00.372 "data_offset": 2048, 00:38:00.372 "data_size": 63488 00:38:00.372 } 00:38:00.372 ] 00:38:00.372 }' 00:38:00.372 17:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:00.372 17:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:38:00.372 17:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:00.372 17:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:38:00.372 17:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:38:00.372 17:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.372 17:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:00.372 [2024-11-26 17:35:00.927758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:00.372 [2024-11-26 17:35:00.944991] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:38:00.372 17:35:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.372 17:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:38:00.372 [2024-11-26 17:35:00.946982] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:01.310 17:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:01.310 17:35:01 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:01.310 17:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:01.310 17:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:01.310 17:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:01.310 17:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:01.310 17:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:01.310 17:35:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.310 17:35:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:01.310 17:35:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.310 17:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:01.310 "name": "raid_bdev1", 00:38:01.310 "uuid": "64720f27-ec9a-45a8-ab28-700595bcb6d7", 00:38:01.310 "strip_size_kb": 0, 00:38:01.310 "state": "online", 00:38:01.310 "raid_level": "raid1", 00:38:01.310 "superblock": true, 00:38:01.310 "num_base_bdevs": 2, 00:38:01.310 "num_base_bdevs_discovered": 2, 00:38:01.310 "num_base_bdevs_operational": 2, 00:38:01.310 "process": { 00:38:01.310 "type": "rebuild", 00:38:01.310 "target": "spare", 00:38:01.310 "progress": { 00:38:01.310 "blocks": 20480, 00:38:01.310 "percent": 32 00:38:01.310 } 00:38:01.310 }, 00:38:01.310 "base_bdevs_list": [ 00:38:01.310 { 00:38:01.310 "name": "spare", 00:38:01.310 "uuid": "25a43e0d-def4-5c41-8e50-410d1e16a919", 00:38:01.310 "is_configured": true, 00:38:01.310 "data_offset": 2048, 00:38:01.310 "data_size": 63488 00:38:01.310 }, 00:38:01.310 { 00:38:01.310 "name": "BaseBdev2", 00:38:01.310 "uuid": "0962900f-3884-5680-94fa-73dc405b7526", 00:38:01.310 
"is_configured": true, 00:38:01.310 "data_offset": 2048, 00:38:01.310 "data_size": 63488 00:38:01.310 } 00:38:01.310 ] 00:38:01.310 }' 00:38:01.310 17:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:01.570 17:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:01.570 17:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:01.570 17:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:01.570 17:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:38:01.570 17:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:38:01.570 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:38:01.570 17:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:38:01.570 17:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:38:01.570 17:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:38:01.570 17:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=397 00:38:01.570 17:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:38:01.570 17:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:01.570 17:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:01.570 17:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:01.570 17:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:01.570 17:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:38:01.570 17:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:01.570 17:35:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.570 17:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:01.570 17:35:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:01.570 17:35:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.570 17:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:01.570 "name": "raid_bdev1", 00:38:01.570 "uuid": "64720f27-ec9a-45a8-ab28-700595bcb6d7", 00:38:01.570 "strip_size_kb": 0, 00:38:01.570 "state": "online", 00:38:01.570 "raid_level": "raid1", 00:38:01.570 "superblock": true, 00:38:01.570 "num_base_bdevs": 2, 00:38:01.570 "num_base_bdevs_discovered": 2, 00:38:01.570 "num_base_bdevs_operational": 2, 00:38:01.570 "process": { 00:38:01.570 "type": "rebuild", 00:38:01.570 "target": "spare", 00:38:01.570 "progress": { 00:38:01.570 "blocks": 22528, 00:38:01.570 "percent": 35 00:38:01.570 } 00:38:01.570 }, 00:38:01.570 "base_bdevs_list": [ 00:38:01.570 { 00:38:01.570 "name": "spare", 00:38:01.570 "uuid": "25a43e0d-def4-5c41-8e50-410d1e16a919", 00:38:01.570 "is_configured": true, 00:38:01.570 "data_offset": 2048, 00:38:01.570 "data_size": 63488 00:38:01.570 }, 00:38:01.570 { 00:38:01.570 "name": "BaseBdev2", 00:38:01.570 "uuid": "0962900f-3884-5680-94fa-73dc405b7526", 00:38:01.570 "is_configured": true, 00:38:01.570 "data_offset": 2048, 00:38:01.570 "data_size": 63488 00:38:01.570 } 00:38:01.570 ] 00:38:01.570 }' 00:38:01.570 17:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:01.570 17:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:01.570 17:35:02 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:01.570 17:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:01.570 17:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:38:02.949 17:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:38:02.949 17:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:02.949 17:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:02.949 17:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:02.949 17:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:02.949 17:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:02.949 17:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:02.949 17:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:02.949 17:35:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.949 17:35:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:02.949 17:35:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.949 17:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:02.949 "name": "raid_bdev1", 00:38:02.949 "uuid": "64720f27-ec9a-45a8-ab28-700595bcb6d7", 00:38:02.949 "strip_size_kb": 0, 00:38:02.949 "state": "online", 00:38:02.949 "raid_level": "raid1", 00:38:02.949 "superblock": true, 00:38:02.949 "num_base_bdevs": 2, 00:38:02.949 "num_base_bdevs_discovered": 2, 00:38:02.949 "num_base_bdevs_operational": 2, 00:38:02.949 "process": { 
00:38:02.949 "type": "rebuild", 00:38:02.949 "target": "spare", 00:38:02.949 "progress": { 00:38:02.949 "blocks": 45056, 00:38:02.949 "percent": 70 00:38:02.949 } 00:38:02.949 }, 00:38:02.949 "base_bdevs_list": [ 00:38:02.949 { 00:38:02.949 "name": "spare", 00:38:02.949 "uuid": "25a43e0d-def4-5c41-8e50-410d1e16a919", 00:38:02.949 "is_configured": true, 00:38:02.949 "data_offset": 2048, 00:38:02.949 "data_size": 63488 00:38:02.949 }, 00:38:02.949 { 00:38:02.949 "name": "BaseBdev2", 00:38:02.949 "uuid": "0962900f-3884-5680-94fa-73dc405b7526", 00:38:02.949 "is_configured": true, 00:38:02.949 "data_offset": 2048, 00:38:02.949 "data_size": 63488 00:38:02.949 } 00:38:02.949 ] 00:38:02.949 }' 00:38:02.949 17:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:02.949 17:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:02.949 17:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:02.949 17:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:02.949 17:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:38:03.518 [2024-11-26 17:35:04.062266] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:38:03.518 [2024-11-26 17:35:04.062457] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:38:03.518 [2024-11-26 17:35:04.062645] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:03.778 17:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:38:03.778 17:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:03.778 17:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:03.778 
17:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:03.778 17:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:03.778 17:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:03.778 17:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:03.778 17:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:03.778 17:35:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:03.778 17:35:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:03.778 17:35:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:03.778 17:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:03.778 "name": "raid_bdev1", 00:38:03.778 "uuid": "64720f27-ec9a-45a8-ab28-700595bcb6d7", 00:38:03.778 "strip_size_kb": 0, 00:38:03.778 "state": "online", 00:38:03.778 "raid_level": "raid1", 00:38:03.778 "superblock": true, 00:38:03.778 "num_base_bdevs": 2, 00:38:03.778 "num_base_bdevs_discovered": 2, 00:38:03.778 "num_base_bdevs_operational": 2, 00:38:03.778 "base_bdevs_list": [ 00:38:03.778 { 00:38:03.778 "name": "spare", 00:38:03.778 "uuid": "25a43e0d-def4-5c41-8e50-410d1e16a919", 00:38:03.778 "is_configured": true, 00:38:03.778 "data_offset": 2048, 00:38:03.778 "data_size": 63488 00:38:03.778 }, 00:38:03.778 { 00:38:03.778 "name": "BaseBdev2", 00:38:03.778 "uuid": "0962900f-3884-5680-94fa-73dc405b7526", 00:38:03.778 "is_configured": true, 00:38:03.778 "data_offset": 2048, 00:38:03.778 "data_size": 63488 00:38:03.778 } 00:38:03.778 ] 00:38:03.778 }' 00:38:03.778 17:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:04.037 17:35:04 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:38:04.037 17:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:04.037 17:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:38:04.037 17:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:38:04.037 17:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:04.037 17:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:04.037 17:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:38:04.037 17:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:38:04.037 17:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:04.037 17:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:04.037 17:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:04.037 17:35:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.037 17:35:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:04.037 17:35:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.037 17:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:04.037 "name": "raid_bdev1", 00:38:04.037 "uuid": "64720f27-ec9a-45a8-ab28-700595bcb6d7", 00:38:04.037 "strip_size_kb": 0, 00:38:04.037 "state": "online", 00:38:04.037 "raid_level": "raid1", 00:38:04.037 "superblock": true, 00:38:04.037 "num_base_bdevs": 2, 00:38:04.037 "num_base_bdevs_discovered": 2, 00:38:04.037 "num_base_bdevs_operational": 2, 00:38:04.037 "base_bdevs_list": [ 00:38:04.037 { 00:38:04.037 
"name": "spare", 00:38:04.037 "uuid": "25a43e0d-def4-5c41-8e50-410d1e16a919", 00:38:04.037 "is_configured": true, 00:38:04.037 "data_offset": 2048, 00:38:04.037 "data_size": 63488 00:38:04.037 }, 00:38:04.037 { 00:38:04.037 "name": "BaseBdev2", 00:38:04.037 "uuid": "0962900f-3884-5680-94fa-73dc405b7526", 00:38:04.037 "is_configured": true, 00:38:04.037 "data_offset": 2048, 00:38:04.037 "data_size": 63488 00:38:04.037 } 00:38:04.037 ] 00:38:04.037 }' 00:38:04.037 17:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:04.037 17:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:38:04.037 17:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:04.037 17:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:38:04.037 17:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:38:04.037 17:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:04.037 17:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:04.037 17:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:04.038 17:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:04.038 17:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:38:04.038 17:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:04.038 17:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:04.038 17:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:04.038 17:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:38:04.038 17:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:04.038 17:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:04.038 17:35:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.038 17:35:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:04.038 17:35:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.297 17:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:04.297 "name": "raid_bdev1", 00:38:04.297 "uuid": "64720f27-ec9a-45a8-ab28-700595bcb6d7", 00:38:04.297 "strip_size_kb": 0, 00:38:04.297 "state": "online", 00:38:04.297 "raid_level": "raid1", 00:38:04.297 "superblock": true, 00:38:04.297 "num_base_bdevs": 2, 00:38:04.297 "num_base_bdevs_discovered": 2, 00:38:04.297 "num_base_bdevs_operational": 2, 00:38:04.297 "base_bdevs_list": [ 00:38:04.297 { 00:38:04.297 "name": "spare", 00:38:04.297 "uuid": "25a43e0d-def4-5c41-8e50-410d1e16a919", 00:38:04.297 "is_configured": true, 00:38:04.297 "data_offset": 2048, 00:38:04.297 "data_size": 63488 00:38:04.297 }, 00:38:04.297 { 00:38:04.297 "name": "BaseBdev2", 00:38:04.297 "uuid": "0962900f-3884-5680-94fa-73dc405b7526", 00:38:04.297 "is_configured": true, 00:38:04.297 "data_offset": 2048, 00:38:04.297 "data_size": 63488 00:38:04.297 } 00:38:04.297 ] 00:38:04.297 }' 00:38:04.297 17:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:04.297 17:35:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:04.557 17:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:38:04.557 17:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.557 17:35:05 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:38:04.557 [2024-11-26 17:35:05.184221] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:04.557 [2024-11-26 17:35:05.184314] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:04.557 [2024-11-26 17:35:05.184462] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:04.557 [2024-11-26 17:35:05.184602] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:04.557 [2024-11-26 17:35:05.184664] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:38:04.557 17:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.557 17:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:38:04.557 17:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:04.557 17:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.557 17:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:04.557 17:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.557 17:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:38:04.557 17:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:38:04.557 17:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:38:04.557 17:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:38:04.557 17:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:38:04.557 17:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:38:04.558 17:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:38:04.558 17:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:38:04.558 17:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:38:04.558 17:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:38:04.558 17:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:38:04.558 17:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:38:04.558 17:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:38:04.817 /dev/nbd0 00:38:05.076 17:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:38:05.076 17:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:38:05.076 17:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:38:05.076 17:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:38:05.077 17:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:38:05.077 17:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:38:05.077 17:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:38:05.077 17:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:38:05.077 17:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:38:05.077 17:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:38:05.077 17:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:05.077 1+0 records in 00:38:05.077 1+0 records out 00:38:05.077 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266795 s, 15.4 MB/s 00:38:05.077 17:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:05.077 17:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:38:05.077 17:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:05.077 17:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:38:05.077 17:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:38:05.077 17:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:05.077 17:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:38:05.077 17:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:38:05.336 /dev/nbd1 00:38:05.336 17:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:38:05.336 17:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:38:05.336 17:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:38:05.336 17:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:38:05.336 17:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:38:05.336 17:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:38:05.336 17:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:38:05.336 17:35:05 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:38:05.336 17:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:38:05.336 17:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:38:05.336 17:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:05.336 1+0 records in 00:38:05.336 1+0 records out 00:38:05.336 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000508689 s, 8.1 MB/s 00:38:05.336 17:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:05.336 17:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:38:05.336 17:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:05.336 17:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:38:05.336 17:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:38:05.336 17:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:05.336 17:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:38:05.336 17:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:38:05.596 17:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:38:05.596 17:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:38:05.596 17:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:38:05.596 17:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:38:05.596 
17:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:38:05.596 17:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:05.596 17:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:38:05.856 17:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:38:05.856 17:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:38:05.856 17:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:38:05.856 17:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:05.856 17:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:05.856 17:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:38:05.856 17:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:38:05.856 17:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:38:05.856 17:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:05.856 17:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:38:06.115 17:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:38:06.115 17:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:38:06.115 17:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:38:06.115 17:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:06.115 17:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:06.115 17:35:06 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:38:06.115 17:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:38:06.115 17:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:38:06.115 17:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:38:06.115 17:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:38:06.115 17:35:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:06.115 17:35:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:06.115 17:35:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:06.115 17:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:38:06.115 17:35:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:06.115 17:35:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:06.115 [2024-11-26 17:35:06.588895] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:38:06.115 [2024-11-26 17:35:06.588959] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:06.115 [2024-11-26 17:35:06.588987] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:38:06.115 [2024-11-26 17:35:06.588998] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:06.115 [2024-11-26 17:35:06.591312] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:06.115 [2024-11-26 17:35:06.591352] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:38:06.115 [2024-11-26 17:35:06.591445] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:38:06.115 [2024-11-26 
17:35:06.591499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:06.115 [2024-11-26 17:35:06.591668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:38:06.115 spare 00:38:06.115 17:35:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:06.115 17:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:38:06.115 17:35:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:06.115 17:35:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:06.115 [2024-11-26 17:35:06.691575] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:38:06.115 [2024-11-26 17:35:06.691615] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:38:06.115 [2024-11-26 17:35:06.691920] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:38:06.116 [2024-11-26 17:35:06.692136] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:38:06.116 [2024-11-26 17:35:06.692155] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:38:06.116 [2024-11-26 17:35:06.692340] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:06.116 17:35:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:06.116 17:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:38:06.116 17:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:06.116 17:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:06.116 17:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:38:06.116 17:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:06.116 17:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:38:06.116 17:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:06.116 17:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:06.116 17:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:06.116 17:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:06.116 17:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:06.116 17:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:06.116 17:35:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:06.116 17:35:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:06.116 17:35:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:06.116 17:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:06.116 "name": "raid_bdev1", 00:38:06.116 "uuid": "64720f27-ec9a-45a8-ab28-700595bcb6d7", 00:38:06.116 "strip_size_kb": 0, 00:38:06.116 "state": "online", 00:38:06.116 "raid_level": "raid1", 00:38:06.116 "superblock": true, 00:38:06.116 "num_base_bdevs": 2, 00:38:06.116 "num_base_bdevs_discovered": 2, 00:38:06.116 "num_base_bdevs_operational": 2, 00:38:06.116 "base_bdevs_list": [ 00:38:06.116 { 00:38:06.116 "name": "spare", 00:38:06.116 "uuid": "25a43e0d-def4-5c41-8e50-410d1e16a919", 00:38:06.116 "is_configured": true, 00:38:06.116 "data_offset": 2048, 00:38:06.116 "data_size": 63488 00:38:06.116 }, 00:38:06.116 { 00:38:06.116 "name": "BaseBdev2", 00:38:06.116 "uuid": 
"0962900f-3884-5680-94fa-73dc405b7526", 00:38:06.116 "is_configured": true, 00:38:06.116 "data_offset": 2048, 00:38:06.116 "data_size": 63488 00:38:06.116 } 00:38:06.116 ] 00:38:06.116 }' 00:38:06.116 17:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:06.116 17:35:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:06.684 17:35:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:06.684 17:35:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:06.684 17:35:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:38:06.684 17:35:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:38:06.684 17:35:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:06.684 17:35:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:06.684 17:35:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:06.684 17:35:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:06.684 17:35:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:06.684 17:35:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:06.684 17:35:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:06.684 "name": "raid_bdev1", 00:38:06.684 "uuid": "64720f27-ec9a-45a8-ab28-700595bcb6d7", 00:38:06.684 "strip_size_kb": 0, 00:38:06.684 "state": "online", 00:38:06.684 "raid_level": "raid1", 00:38:06.684 "superblock": true, 00:38:06.684 "num_base_bdevs": 2, 00:38:06.684 "num_base_bdevs_discovered": 2, 00:38:06.684 "num_base_bdevs_operational": 2, 00:38:06.684 "base_bdevs_list": [ 00:38:06.684 { 
00:38:06.684 "name": "spare", 00:38:06.685 "uuid": "25a43e0d-def4-5c41-8e50-410d1e16a919", 00:38:06.685 "is_configured": true, 00:38:06.685 "data_offset": 2048, 00:38:06.685 "data_size": 63488 00:38:06.685 }, 00:38:06.685 { 00:38:06.685 "name": "BaseBdev2", 00:38:06.685 "uuid": "0962900f-3884-5680-94fa-73dc405b7526", 00:38:06.685 "is_configured": true, 00:38:06.685 "data_offset": 2048, 00:38:06.685 "data_size": 63488 00:38:06.685 } 00:38:06.685 ] 00:38:06.685 }' 00:38:06.685 17:35:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:06.685 17:35:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:38:06.685 17:35:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:06.685 17:35:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:38:06.685 17:35:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:06.685 17:35:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:06.685 17:35:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:38:06.685 17:35:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:06.685 17:35:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:06.685 17:35:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:38:06.685 17:35:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:38:06.685 17:35:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:06.685 17:35:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:06.685 [2024-11-26 17:35:07.343903] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:38:06.685 17:35:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:06.685 17:35:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:06.685 17:35:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:06.685 17:35:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:06.685 17:35:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:06.685 17:35:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:06.685 17:35:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:38:06.685 17:35:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:06.685 17:35:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:06.685 17:35:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:06.685 17:35:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:06.685 17:35:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:06.685 17:35:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:06.685 17:35:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:06.685 17:35:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:06.685 17:35:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:06.944 17:35:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:06.944 "name": "raid_bdev1", 00:38:06.944 "uuid": "64720f27-ec9a-45a8-ab28-700595bcb6d7", 00:38:06.944 "strip_size_kb": 0, 00:38:06.944 
"state": "online", 00:38:06.944 "raid_level": "raid1", 00:38:06.944 "superblock": true, 00:38:06.944 "num_base_bdevs": 2, 00:38:06.944 "num_base_bdevs_discovered": 1, 00:38:06.944 "num_base_bdevs_operational": 1, 00:38:06.944 "base_bdevs_list": [ 00:38:06.944 { 00:38:06.944 "name": null, 00:38:06.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:06.944 "is_configured": false, 00:38:06.944 "data_offset": 0, 00:38:06.944 "data_size": 63488 00:38:06.944 }, 00:38:06.944 { 00:38:06.944 "name": "BaseBdev2", 00:38:06.944 "uuid": "0962900f-3884-5680-94fa-73dc405b7526", 00:38:06.944 "is_configured": true, 00:38:06.944 "data_offset": 2048, 00:38:06.944 "data_size": 63488 00:38:06.944 } 00:38:06.944 ] 00:38:06.944 }' 00:38:06.944 17:35:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:06.944 17:35:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:07.203 17:35:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:38:07.203 17:35:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:07.203 17:35:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:07.203 [2024-11-26 17:35:07.807208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:07.203 [2024-11-26 17:35:07.807487] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:38:07.203 [2024-11-26 17:35:07.807541] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:38:07.203 [2024-11-26 17:35:07.807605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:07.203 [2024-11-26 17:35:07.825026] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:38:07.203 17:35:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:07.203 17:35:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:38:07.203 [2024-11-26 17:35:07.826904] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:08.139 17:35:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:08.139 17:35:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:08.139 17:35:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:08.139 17:35:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:08.139 17:35:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:08.399 17:35:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:08.399 17:35:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:08.399 17:35:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.399 17:35:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:08.399 17:35:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.399 17:35:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:08.399 "name": "raid_bdev1", 00:38:08.399 "uuid": "64720f27-ec9a-45a8-ab28-700595bcb6d7", 00:38:08.399 "strip_size_kb": 0, 00:38:08.399 "state": "online", 00:38:08.399 "raid_level": "raid1", 
00:38:08.399 "superblock": true, 00:38:08.399 "num_base_bdevs": 2, 00:38:08.399 "num_base_bdevs_discovered": 2, 00:38:08.399 "num_base_bdevs_operational": 2, 00:38:08.399 "process": { 00:38:08.399 "type": "rebuild", 00:38:08.399 "target": "spare", 00:38:08.399 "progress": { 00:38:08.399 "blocks": 20480, 00:38:08.399 "percent": 32 00:38:08.399 } 00:38:08.399 }, 00:38:08.399 "base_bdevs_list": [ 00:38:08.399 { 00:38:08.399 "name": "spare", 00:38:08.399 "uuid": "25a43e0d-def4-5c41-8e50-410d1e16a919", 00:38:08.399 "is_configured": true, 00:38:08.399 "data_offset": 2048, 00:38:08.399 "data_size": 63488 00:38:08.399 }, 00:38:08.399 { 00:38:08.399 "name": "BaseBdev2", 00:38:08.399 "uuid": "0962900f-3884-5680-94fa-73dc405b7526", 00:38:08.399 "is_configured": true, 00:38:08.399 "data_offset": 2048, 00:38:08.399 "data_size": 63488 00:38:08.399 } 00:38:08.399 ] 00:38:08.399 }' 00:38:08.399 17:35:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:08.399 17:35:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:08.399 17:35:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:08.399 17:35:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:08.399 17:35:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:38:08.399 17:35:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.399 17:35:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:08.399 [2024-11-26 17:35:08.982563] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:08.399 [2024-11-26 17:35:09.032950] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:38:08.399 [2024-11-26 17:35:09.033049] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:38:08.399 [2024-11-26 17:35:09.033068] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:08.399 [2024-11-26 17:35:09.033080] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:38:08.399 17:35:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.399 17:35:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:08.399 17:35:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:08.399 17:35:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:08.399 17:35:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:08.399 17:35:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:08.399 17:35:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:38:08.399 17:35:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:08.399 17:35:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:08.399 17:35:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:08.399 17:35:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:08.399 17:35:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:08.399 17:35:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:08.399 17:35:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.399 17:35:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:08.657 17:35:09 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.657 17:35:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:08.657 "name": "raid_bdev1", 00:38:08.657 "uuid": "64720f27-ec9a-45a8-ab28-700595bcb6d7", 00:38:08.657 "strip_size_kb": 0, 00:38:08.657 "state": "online", 00:38:08.657 "raid_level": "raid1", 00:38:08.657 "superblock": true, 00:38:08.657 "num_base_bdevs": 2, 00:38:08.657 "num_base_bdevs_discovered": 1, 00:38:08.657 "num_base_bdevs_operational": 1, 00:38:08.657 "base_bdevs_list": [ 00:38:08.657 { 00:38:08.657 "name": null, 00:38:08.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:08.657 "is_configured": false, 00:38:08.658 "data_offset": 0, 00:38:08.658 "data_size": 63488 00:38:08.658 }, 00:38:08.658 { 00:38:08.658 "name": "BaseBdev2", 00:38:08.658 "uuid": "0962900f-3884-5680-94fa-73dc405b7526", 00:38:08.658 "is_configured": true, 00:38:08.658 "data_offset": 2048, 00:38:08.658 "data_size": 63488 00:38:08.658 } 00:38:08.658 ] 00:38:08.658 }' 00:38:08.658 17:35:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:08.658 17:35:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:08.915 17:35:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:38:08.915 17:35:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.915 17:35:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:08.915 [2024-11-26 17:35:09.573511] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:38:08.915 [2024-11-26 17:35:09.573629] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:08.915 [2024-11-26 17:35:09.573670] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:38:08.915 [2024-11-26 17:35:09.573691] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:08.915 [2024-11-26 17:35:09.574372] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:08.915 [2024-11-26 17:35:09.574437] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:38:08.915 [2024-11-26 17:35:09.574595] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:38:08.915 [2024-11-26 17:35:09.574631] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:38:08.915 [2024-11-26 17:35:09.574649] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:38:08.915 [2024-11-26 17:35:09.574695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:08.915 spare 00:38:08.915 [2024-11-26 17:35:09.593472] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:38:08.915 17:35:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.915 17:35:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:38:08.915 [2024-11-26 17:35:09.595624] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:10.290 17:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:10.290 17:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:10.290 17:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:10.290 17:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:10.290 17:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:10.290 17:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:38:10.290 17:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:10.290 17:35:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:10.290 17:35:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:10.291 17:35:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:10.291 17:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:10.291 "name": "raid_bdev1", 00:38:10.291 "uuid": "64720f27-ec9a-45a8-ab28-700595bcb6d7", 00:38:10.291 "strip_size_kb": 0, 00:38:10.291 "state": "online", 00:38:10.291 "raid_level": "raid1", 00:38:10.291 "superblock": true, 00:38:10.291 "num_base_bdevs": 2, 00:38:10.291 "num_base_bdevs_discovered": 2, 00:38:10.291 "num_base_bdevs_operational": 2, 00:38:10.291 "process": { 00:38:10.291 "type": "rebuild", 00:38:10.291 "target": "spare", 00:38:10.291 "progress": { 00:38:10.291 "blocks": 20480, 00:38:10.291 "percent": 32 00:38:10.291 } 00:38:10.291 }, 00:38:10.291 "base_bdevs_list": [ 00:38:10.291 { 00:38:10.291 "name": "spare", 00:38:10.291 "uuid": "25a43e0d-def4-5c41-8e50-410d1e16a919", 00:38:10.291 "is_configured": true, 00:38:10.291 "data_offset": 2048, 00:38:10.291 "data_size": 63488 00:38:10.291 }, 00:38:10.291 { 00:38:10.291 "name": "BaseBdev2", 00:38:10.291 "uuid": "0962900f-3884-5680-94fa-73dc405b7526", 00:38:10.291 "is_configured": true, 00:38:10.291 "data_offset": 2048, 00:38:10.291 "data_size": 63488 00:38:10.291 } 00:38:10.291 ] 00:38:10.291 }' 00:38:10.291 17:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:10.291 17:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:10.291 17:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:10.291 
17:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:10.291 17:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:38:10.291 17:35:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:10.291 17:35:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:10.291 [2024-11-26 17:35:10.751356] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:10.291 [2024-11-26 17:35:10.801587] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:38:10.291 [2024-11-26 17:35:10.801653] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:10.291 [2024-11-26 17:35:10.801671] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:10.291 [2024-11-26 17:35:10.801694] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:38:10.291 17:35:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:10.291 17:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:10.291 17:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:10.291 17:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:10.291 17:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:10.291 17:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:10.291 17:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:38:10.291 17:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:10.291 17:35:10 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:10.291 17:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:10.291 17:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:10.291 17:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:10.291 17:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:10.291 17:35:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:10.291 17:35:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:10.291 17:35:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:10.291 17:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:10.291 "name": "raid_bdev1", 00:38:10.291 "uuid": "64720f27-ec9a-45a8-ab28-700595bcb6d7", 00:38:10.291 "strip_size_kb": 0, 00:38:10.291 "state": "online", 00:38:10.291 "raid_level": "raid1", 00:38:10.291 "superblock": true, 00:38:10.291 "num_base_bdevs": 2, 00:38:10.291 "num_base_bdevs_discovered": 1, 00:38:10.291 "num_base_bdevs_operational": 1, 00:38:10.291 "base_bdevs_list": [ 00:38:10.291 { 00:38:10.291 "name": null, 00:38:10.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:10.291 "is_configured": false, 00:38:10.291 "data_offset": 0, 00:38:10.291 "data_size": 63488 00:38:10.291 }, 00:38:10.291 { 00:38:10.291 "name": "BaseBdev2", 00:38:10.291 "uuid": "0962900f-3884-5680-94fa-73dc405b7526", 00:38:10.291 "is_configured": true, 00:38:10.291 "data_offset": 2048, 00:38:10.291 "data_size": 63488 00:38:10.291 } 00:38:10.291 ] 00:38:10.291 }' 00:38:10.291 17:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:10.291 17:35:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:10.857 17:35:11 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:10.857 17:35:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:10.857 17:35:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:38:10.857 17:35:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:38:10.857 17:35:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:10.857 17:35:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:10.857 17:35:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:10.857 17:35:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:10.857 17:35:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:10.857 17:35:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:10.857 17:35:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:10.857 "name": "raid_bdev1", 00:38:10.857 "uuid": "64720f27-ec9a-45a8-ab28-700595bcb6d7", 00:38:10.857 "strip_size_kb": 0, 00:38:10.857 "state": "online", 00:38:10.857 "raid_level": "raid1", 00:38:10.857 "superblock": true, 00:38:10.857 "num_base_bdevs": 2, 00:38:10.857 "num_base_bdevs_discovered": 1, 00:38:10.857 "num_base_bdevs_operational": 1, 00:38:10.857 "base_bdevs_list": [ 00:38:10.857 { 00:38:10.857 "name": null, 00:38:10.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:10.857 "is_configured": false, 00:38:10.857 "data_offset": 0, 00:38:10.857 "data_size": 63488 00:38:10.857 }, 00:38:10.857 { 00:38:10.857 "name": "BaseBdev2", 00:38:10.857 "uuid": "0962900f-3884-5680-94fa-73dc405b7526", 00:38:10.857 "is_configured": true, 00:38:10.857 "data_offset": 2048, 00:38:10.857 "data_size": 
63488 00:38:10.857 } 00:38:10.857 ] 00:38:10.857 }' 00:38:10.857 17:35:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:10.857 17:35:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:38:10.857 17:35:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:10.857 17:35:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:38:10.857 17:35:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:38:10.857 17:35:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:10.857 17:35:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:10.857 17:35:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:10.857 17:35:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:38:10.857 17:35:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:10.857 17:35:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:10.857 [2024-11-26 17:35:11.489692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:38:10.857 [2024-11-26 17:35:11.489776] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:10.857 [2024-11-26 17:35:11.489809] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:38:10.857 [2024-11-26 17:35:11.489831] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:10.857 [2024-11-26 17:35:11.490361] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:10.857 [2024-11-26 17:35:11.490390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:38:10.857 [2024-11-26 17:35:11.490491] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:38:10.858 [2024-11-26 17:35:11.490531] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:38:10.858 [2024-11-26 17:35:11.490542] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:38:10.858 [2024-11-26 17:35:11.490555] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:38:10.858 BaseBdev1 00:38:10.858 17:35:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:10.858 17:35:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:38:12.233 17:35:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:12.233 17:35:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:12.233 17:35:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:12.233 17:35:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:12.233 17:35:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:12.233 17:35:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:38:12.233 17:35:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:12.233 17:35:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:12.233 17:35:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:12.233 17:35:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:12.233 17:35:12 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:12.233 17:35:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:12.233 17:35:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:12.233 17:35:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:12.233 17:35:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:12.233 17:35:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:12.233 "name": "raid_bdev1", 00:38:12.233 "uuid": "64720f27-ec9a-45a8-ab28-700595bcb6d7", 00:38:12.233 "strip_size_kb": 0, 00:38:12.233 "state": "online", 00:38:12.233 "raid_level": "raid1", 00:38:12.233 "superblock": true, 00:38:12.233 "num_base_bdevs": 2, 00:38:12.233 "num_base_bdevs_discovered": 1, 00:38:12.233 "num_base_bdevs_operational": 1, 00:38:12.233 "base_bdevs_list": [ 00:38:12.233 { 00:38:12.233 "name": null, 00:38:12.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:12.233 "is_configured": false, 00:38:12.233 "data_offset": 0, 00:38:12.233 "data_size": 63488 00:38:12.233 }, 00:38:12.233 { 00:38:12.233 "name": "BaseBdev2", 00:38:12.233 "uuid": "0962900f-3884-5680-94fa-73dc405b7526", 00:38:12.233 "is_configured": true, 00:38:12.233 "data_offset": 2048, 00:38:12.233 "data_size": 63488 00:38:12.233 } 00:38:12.233 ] 00:38:12.233 }' 00:38:12.233 17:35:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:12.233 17:35:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:12.494 17:35:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:12.494 17:35:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:12.494 17:35:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:38:12.494 17:35:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:38:12.494 17:35:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:12.494 17:35:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:12.494 17:35:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:12.494 17:35:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:12.494 17:35:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:12.494 17:35:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:12.494 17:35:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:12.494 "name": "raid_bdev1", 00:38:12.494 "uuid": "64720f27-ec9a-45a8-ab28-700595bcb6d7", 00:38:12.494 "strip_size_kb": 0, 00:38:12.494 "state": "online", 00:38:12.494 "raid_level": "raid1", 00:38:12.494 "superblock": true, 00:38:12.494 "num_base_bdevs": 2, 00:38:12.494 "num_base_bdevs_discovered": 1, 00:38:12.494 "num_base_bdevs_operational": 1, 00:38:12.494 "base_bdevs_list": [ 00:38:12.494 { 00:38:12.494 "name": null, 00:38:12.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:12.494 "is_configured": false, 00:38:12.494 "data_offset": 0, 00:38:12.494 "data_size": 63488 00:38:12.494 }, 00:38:12.494 { 00:38:12.494 "name": "BaseBdev2", 00:38:12.494 "uuid": "0962900f-3884-5680-94fa-73dc405b7526", 00:38:12.494 "is_configured": true, 00:38:12.494 "data_offset": 2048, 00:38:12.494 "data_size": 63488 00:38:12.494 } 00:38:12.494 ] 00:38:12.494 }' 00:38:12.494 17:35:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:12.494 17:35:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:38:12.494 17:35:13 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:12.494 17:35:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:38:12.494 17:35:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:38:12.494 17:35:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:38:12.494 17:35:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:38:12.494 17:35:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:38:12.494 17:35:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:12.494 17:35:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:38:12.494 17:35:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:12.494 17:35:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:38:12.494 17:35:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:12.494 17:35:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:12.494 [2024-11-26 17:35:13.095077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:12.494 [2024-11-26 17:35:13.095270] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:38:12.494 [2024-11-26 17:35:13.095298] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:38:12.494 request: 00:38:12.494 { 00:38:12.494 "base_bdev": "BaseBdev1", 00:38:12.494 "raid_bdev": "raid_bdev1", 00:38:12.494 "method": 
"bdev_raid_add_base_bdev", 00:38:12.494 "req_id": 1 00:38:12.494 } 00:38:12.494 Got JSON-RPC error response 00:38:12.494 response: 00:38:12.494 { 00:38:12.494 "code": -22, 00:38:12.494 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:38:12.494 } 00:38:12.494 17:35:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:38:12.494 17:35:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:38:12.494 17:35:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:12.494 17:35:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:12.494 17:35:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:12.494 17:35:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:38:13.429 17:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:13.429 17:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:13.429 17:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:13.429 17:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:13.429 17:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:13.429 17:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:38:13.429 17:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:13.429 17:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:13.429 17:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:13.429 17:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:13.429 17:35:14 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:13.429 17:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:13.429 17:35:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:13.429 17:35:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:13.687 17:35:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:13.687 17:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:13.687 "name": "raid_bdev1", 00:38:13.687 "uuid": "64720f27-ec9a-45a8-ab28-700595bcb6d7", 00:38:13.687 "strip_size_kb": 0, 00:38:13.687 "state": "online", 00:38:13.687 "raid_level": "raid1", 00:38:13.687 "superblock": true, 00:38:13.687 "num_base_bdevs": 2, 00:38:13.687 "num_base_bdevs_discovered": 1, 00:38:13.687 "num_base_bdevs_operational": 1, 00:38:13.687 "base_bdevs_list": [ 00:38:13.687 { 00:38:13.687 "name": null, 00:38:13.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:13.687 "is_configured": false, 00:38:13.687 "data_offset": 0, 00:38:13.687 "data_size": 63488 00:38:13.687 }, 00:38:13.687 { 00:38:13.687 "name": "BaseBdev2", 00:38:13.687 "uuid": "0962900f-3884-5680-94fa-73dc405b7526", 00:38:13.687 "is_configured": true, 00:38:13.687 "data_offset": 2048, 00:38:13.687 "data_size": 63488 00:38:13.687 } 00:38:13.687 ] 00:38:13.687 }' 00:38:13.687 17:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:13.687 17:35:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:13.950 17:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:13.950 17:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:13.950 17:35:14 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:38:13.950 17:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:38:13.950 17:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:13.950 17:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:13.950 17:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:13.950 17:35:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:13.950 17:35:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:13.950 17:35:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:14.209 17:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:14.209 "name": "raid_bdev1", 00:38:14.209 "uuid": "64720f27-ec9a-45a8-ab28-700595bcb6d7", 00:38:14.209 "strip_size_kb": 0, 00:38:14.209 "state": "online", 00:38:14.209 "raid_level": "raid1", 00:38:14.209 "superblock": true, 00:38:14.209 "num_base_bdevs": 2, 00:38:14.209 "num_base_bdevs_discovered": 1, 00:38:14.209 "num_base_bdevs_operational": 1, 00:38:14.209 "base_bdevs_list": [ 00:38:14.209 { 00:38:14.209 "name": null, 00:38:14.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:14.209 "is_configured": false, 00:38:14.209 "data_offset": 0, 00:38:14.209 "data_size": 63488 00:38:14.209 }, 00:38:14.209 { 00:38:14.209 "name": "BaseBdev2", 00:38:14.209 "uuid": "0962900f-3884-5680-94fa-73dc405b7526", 00:38:14.209 "is_configured": true, 00:38:14.209 "data_offset": 2048, 00:38:14.209 "data_size": 63488 00:38:14.209 } 00:38:14.209 ] 00:38:14.209 }' 00:38:14.209 17:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:14.209 17:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:38:14.209 17:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:14.209 17:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:38:14.209 17:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 76025 00:38:14.209 17:35:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 76025 ']' 00:38:14.209 17:35:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 76025 00:38:14.209 17:35:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:38:14.209 17:35:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:14.209 17:35:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76025 00:38:14.209 17:35:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:14.209 17:35:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:14.209 killing process with pid 76025 00:38:14.209 17:35:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76025' 00:38:14.209 17:35:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 76025 00:38:14.209 Received shutdown signal, test time was about 60.000000 seconds 00:38:14.209 00:38:14.209 Latency(us) 00:38:14.209 [2024-11-26T17:35:14.904Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:14.209 [2024-11-26T17:35:14.904Z] =================================================================================================================== 00:38:14.209 [2024-11-26T17:35:14.904Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:14.209 [2024-11-26 17:35:14.791343] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:38:14.209 [2024-11-26 
17:35:14.791499] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:14.209 [2024-11-26 17:35:14.791576] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:14.209 [2024-11-26 17:35:14.791591] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:38:14.209 17:35:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 76025 00:38:14.466 [2024-11-26 17:35:15.113013] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:38:15.842 17:35:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:38:15.842 00:38:15.842 real 0m24.056s 00:38:15.842 user 0m29.601s 00:38:15.842 sys 0m3.855s 00:38:15.842 17:35:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:15.842 17:35:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:15.842 ************************************ 00:38:15.842 END TEST raid_rebuild_test_sb 00:38:15.842 ************************************ 00:38:15.842 17:35:16 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:38:15.842 17:35:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:38:15.842 17:35:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:15.842 17:35:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:38:15.842 ************************************ 00:38:15.842 START TEST raid_rebuild_test_io 00:38:15.842 ************************************ 00:38:15.842 17:35:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:38:15.842 17:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:38:15.842 17:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:38:15.842 17:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:38:15.842 17:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:38:15.842 17:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:38:15.842 17:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:38:15.842 17:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:38:15.842 17:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:38:15.842 17:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:38:15.842 17:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:38:15.842 17:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:38:15.842 17:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:38:15.842 17:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:38:15.842 17:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:38:15.842 17:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:38:15.842 17:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:38:15.842 17:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:38:15.842 17:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:38:15.842 17:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:38:15.842 17:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:38:15.842 17:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:38:15.842 
17:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:38:15.842 17:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:38:15.842 17:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76763 00:38:15.842 17:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76763 00:38:15.842 17:35:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76763 ']' 00:38:15.842 17:35:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:38:15.842 17:35:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:15.842 17:35:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:15.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:15.842 17:35:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:15.842 17:35:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:15.842 17:35:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:16.102 [2024-11-26 17:35:16.585765] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:38:16.102 I/O size of 3145728 is greater than zero copy threshold (65536). 00:38:16.102 Zero copy mechanism will not be used. 
00:38:16.102 [2024-11-26 17:35:16.585907] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76763 ] 00:38:16.102 [2024-11-26 17:35:16.765815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:16.362 [2024-11-26 17:35:16.901629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:16.621 [2024-11-26 17:35:17.150724] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:16.621 [2024-11-26 17:35:17.150775] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:16.880 17:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:16.880 17:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:38:16.880 17:35:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:38:16.880 17:35:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:38:16.880 17:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:16.880 17:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:16.880 BaseBdev1_malloc 00:38:16.881 17:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:16.881 17:35:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:38:16.881 17:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:16.881 17:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:16.881 [2024-11-26 17:35:17.547124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:38:16.881 [2024-11-26 17:35:17.547208] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:16.881 [2024-11-26 17:35:17.547271] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:38:16.881 [2024-11-26 17:35:17.547298] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:16.881 [2024-11-26 17:35:17.549753] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:16.881 [2024-11-26 17:35:17.549802] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:38:16.881 BaseBdev1 00:38:16.881 17:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:16.881 17:35:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:38:16.881 17:35:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:38:16.881 17:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:16.881 17:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:17.139 BaseBdev2_malloc 00:38:17.139 17:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.140 17:35:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:38:17.140 17:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.140 17:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:17.140 [2024-11-26 17:35:17.610090] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:38:17.140 [2024-11-26 17:35:17.610179] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:17.140 [2024-11-26 17:35:17.610222] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:38:17.140 [2024-11-26 17:35:17.610247] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:17.140 [2024-11-26 17:35:17.612735] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:17.140 [2024-11-26 17:35:17.612784] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:38:17.140 BaseBdev2 00:38:17.140 17:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.140 17:35:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:38:17.140 17:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.140 17:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:17.140 spare_malloc 00:38:17.140 17:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.140 17:35:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:38:17.140 17:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.140 17:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:17.140 spare_delay 00:38:17.140 17:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.140 17:35:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:38:17.140 17:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.140 17:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:17.140 [2024-11-26 17:35:17.699783] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:38:17.140 [2024-11-26 17:35:17.699885] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:17.140 [2024-11-26 17:35:17.699928] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:38:17.140 [2024-11-26 17:35:17.699959] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:17.140 [2024-11-26 17:35:17.702480] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:17.140 [2024-11-26 17:35:17.702553] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:38:17.140 spare 00:38:17.140 17:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.140 17:35:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:38:17.140 17:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.140 17:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:17.140 [2024-11-26 17:35:17.711824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:17.140 [2024-11-26 17:35:17.713915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:38:17.140 [2024-11-26 17:35:17.714071] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:38:17.140 [2024-11-26 17:35:17.714102] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:38:17.140 [2024-11-26 17:35:17.714470] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:38:17.140 [2024-11-26 17:35:17.714753] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:38:17.140 [2024-11-26 17:35:17.714778] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007780 00:38:17.140 [2024-11-26 17:35:17.714988] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:17.140 17:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.140 17:35:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:38:17.140 17:35:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:17.140 17:35:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:17.140 17:35:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:17.140 17:35:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:17.140 17:35:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:38:17.140 17:35:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:17.140 17:35:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:17.140 17:35:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:17.140 17:35:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:17.140 17:35:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:17.140 17:35:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:17.140 17:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.140 17:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:17.140 17:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.140 17:35:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:17.140 
"name": "raid_bdev1", 00:38:17.140 "uuid": "cf5c9cea-fef3-428d-8ac1-2b39e69bcbc9", 00:38:17.140 "strip_size_kb": 0, 00:38:17.140 "state": "online", 00:38:17.140 "raid_level": "raid1", 00:38:17.140 "superblock": false, 00:38:17.140 "num_base_bdevs": 2, 00:38:17.140 "num_base_bdevs_discovered": 2, 00:38:17.140 "num_base_bdevs_operational": 2, 00:38:17.140 "base_bdevs_list": [ 00:38:17.140 { 00:38:17.140 "name": "BaseBdev1", 00:38:17.140 "uuid": "256abae7-aca4-52db-a0f0-0751631a1b45", 00:38:17.140 "is_configured": true, 00:38:17.140 "data_offset": 0, 00:38:17.140 "data_size": 65536 00:38:17.140 }, 00:38:17.140 { 00:38:17.140 "name": "BaseBdev2", 00:38:17.140 "uuid": "a7bea67a-5e04-5cad-a609-0db2ba1500b4", 00:38:17.140 "is_configured": true, 00:38:17.140 "data_offset": 0, 00:38:17.140 "data_size": 65536 00:38:17.140 } 00:38:17.140 ] 00:38:17.140 }' 00:38:17.140 17:35:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:17.140 17:35:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:17.709 17:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:38:17.709 17:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:38:17.709 17:35:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.709 17:35:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:17.709 [2024-11-26 17:35:18.159396] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:17.709 17:35:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.709 17:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:38:17.709 17:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:17.709 17:35:18 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.709 17:35:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:17.709 17:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:38:17.709 17:35:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.709 17:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:38:17.709 17:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:38:17.709 17:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:38:17.709 17:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:38:17.709 17:35:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.709 17:35:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:17.709 [2024-11-26 17:35:18.258899] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:38:17.709 17:35:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.709 17:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:17.709 17:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:17.709 17:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:17.709 17:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:17.709 17:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:17.709 17:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:38:17.709 17:35:18 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:17.709 17:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:17.709 17:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:17.709 17:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:17.709 17:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:17.710 17:35:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.710 17:35:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:17.710 17:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:17.710 17:35:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.710 17:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:17.710 "name": "raid_bdev1", 00:38:17.710 "uuid": "cf5c9cea-fef3-428d-8ac1-2b39e69bcbc9", 00:38:17.710 "strip_size_kb": 0, 00:38:17.710 "state": "online", 00:38:17.710 "raid_level": "raid1", 00:38:17.710 "superblock": false, 00:38:17.710 "num_base_bdevs": 2, 00:38:17.710 "num_base_bdevs_discovered": 1, 00:38:17.710 "num_base_bdevs_operational": 1, 00:38:17.710 "base_bdevs_list": [ 00:38:17.710 { 00:38:17.710 "name": null, 00:38:17.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:17.710 "is_configured": false, 00:38:17.710 "data_offset": 0, 00:38:17.710 "data_size": 65536 00:38:17.710 }, 00:38:17.710 { 00:38:17.710 "name": "BaseBdev2", 00:38:17.710 "uuid": "a7bea67a-5e04-5cad-a609-0db2ba1500b4", 00:38:17.710 "is_configured": true, 00:38:17.710 "data_offset": 0, 00:38:17.710 "data_size": 65536 00:38:17.710 } 00:38:17.710 ] 00:38:17.710 }' 00:38:17.710 17:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:38:17.710 17:35:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:17.710 I/O size of 3145728 is greater than zero copy threshold (65536). 00:38:17.710 Zero copy mechanism will not be used. 00:38:17.710 Running I/O for 60 seconds... 00:38:17.710 [2024-11-26 17:35:18.396682] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:38:18.278 17:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:38:18.278 17:35:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:18.278 17:35:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:18.278 [2024-11-26 17:35:18.717686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:18.278 17:35:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:18.278 17:35:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:38:18.278 [2024-11-26 17:35:18.804923] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:38:18.278 [2024-11-26 17:35:18.807130] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:18.278 [2024-11-26 17:35:18.909944] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:38:18.278 [2024-11-26 17:35:18.910585] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:38:18.537 [2024-11-26 17:35:19.114217] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:38:18.537 [2024-11-26 17:35:19.114601] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:38:18.796 145.00 IOPS, 435.00 MiB/s 
[2024-11-26T17:35:19.491Z] [2024-11-26 17:35:19.450368] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:38:18.796 [2024-11-26 17:35:19.451034] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:38:19.054 [2024-11-26 17:35:19.568024] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:38:19.313 17:35:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:19.313 17:35:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:19.313 17:35:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:19.313 17:35:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:19.313 17:35:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:19.313 17:35:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:19.313 17:35:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:19.313 17:35:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:19.313 17:35:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:19.313 [2024-11-26 17:35:19.795024] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:38:19.313 [2024-11-26 17:35:19.795635] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:38:19.313 17:35:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:19.313 17:35:19 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:19.313 "name": "raid_bdev1", 00:38:19.313 "uuid": "cf5c9cea-fef3-428d-8ac1-2b39e69bcbc9", 00:38:19.313 "strip_size_kb": 0, 00:38:19.313 "state": "online", 00:38:19.313 "raid_level": "raid1", 00:38:19.313 "superblock": false, 00:38:19.313 "num_base_bdevs": 2, 00:38:19.313 "num_base_bdevs_discovered": 2, 00:38:19.313 "num_base_bdevs_operational": 2, 00:38:19.313 "process": { 00:38:19.313 "type": "rebuild", 00:38:19.313 "target": "spare", 00:38:19.313 "progress": { 00:38:19.313 "blocks": 12288, 00:38:19.313 "percent": 18 00:38:19.313 } 00:38:19.313 }, 00:38:19.313 "base_bdevs_list": [ 00:38:19.313 { 00:38:19.313 "name": "spare", 00:38:19.313 "uuid": "7f16a935-58bc-5e21-a6b3-a49c51169078", 00:38:19.313 "is_configured": true, 00:38:19.313 "data_offset": 0, 00:38:19.313 "data_size": 65536 00:38:19.313 }, 00:38:19.313 { 00:38:19.313 "name": "BaseBdev2", 00:38:19.313 "uuid": "a7bea67a-5e04-5cad-a609-0db2ba1500b4", 00:38:19.313 "is_configured": true, 00:38:19.313 "data_offset": 0, 00:38:19.313 "data_size": 65536 00:38:19.313 } 00:38:19.313 ] 00:38:19.313 }' 00:38:19.313 17:35:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:19.313 17:35:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:19.313 17:35:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:19.313 17:35:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:19.313 17:35:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:38:19.313 17:35:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:19.313 17:35:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:19.313 [2024-11-26 17:35:19.935463] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: 
spare 00:38:19.572 [2024-11-26 17:35:20.077161] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:38:19.572 [2024-11-26 17:35:20.087281] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:19.572 [2024-11-26 17:35:20.087353] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:19.572 [2024-11-26 17:35:20.087369] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:38:19.572 [2024-11-26 17:35:20.125375] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:38:19.572 17:35:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:19.572 17:35:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:19.572 17:35:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:19.572 17:35:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:19.572 17:35:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:19.572 17:35:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:19.573 17:35:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:38:19.573 17:35:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:19.573 17:35:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:19.573 17:35:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:19.573 17:35:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:19.573 17:35:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:38:19.573 17:35:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:19.573 17:35:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:19.573 17:35:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:19.573 17:35:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:19.573 17:35:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:19.573 "name": "raid_bdev1", 00:38:19.573 "uuid": "cf5c9cea-fef3-428d-8ac1-2b39e69bcbc9", 00:38:19.573 "strip_size_kb": 0, 00:38:19.573 "state": "online", 00:38:19.573 "raid_level": "raid1", 00:38:19.573 "superblock": false, 00:38:19.573 "num_base_bdevs": 2, 00:38:19.573 "num_base_bdevs_discovered": 1, 00:38:19.573 "num_base_bdevs_operational": 1, 00:38:19.573 "base_bdevs_list": [ 00:38:19.573 { 00:38:19.573 "name": null, 00:38:19.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:19.573 "is_configured": false, 00:38:19.573 "data_offset": 0, 00:38:19.573 "data_size": 65536 00:38:19.573 }, 00:38:19.573 { 00:38:19.573 "name": "BaseBdev2", 00:38:19.573 "uuid": "a7bea67a-5e04-5cad-a609-0db2ba1500b4", 00:38:19.573 "is_configured": true, 00:38:19.573 "data_offset": 0, 00:38:19.573 "data_size": 65536 00:38:19.573 } 00:38:19.573 ] 00:38:19.573 }' 00:38:19.573 17:35:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:19.573 17:35:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:20.090 150.00 IOPS, 450.00 MiB/s [2024-11-26T17:35:20.785Z] 17:35:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:20.090 17:35:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:20.090 17:35:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 
00:38:20.090 17:35:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:38:20.090 17:35:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:20.090 17:35:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:20.090 17:35:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:20.090 17:35:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:20.090 17:35:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:20.090 17:35:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:20.090 17:35:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:20.090 "name": "raid_bdev1", 00:38:20.090 "uuid": "cf5c9cea-fef3-428d-8ac1-2b39e69bcbc9", 00:38:20.090 "strip_size_kb": 0, 00:38:20.090 "state": "online", 00:38:20.090 "raid_level": "raid1", 00:38:20.090 "superblock": false, 00:38:20.090 "num_base_bdevs": 2, 00:38:20.091 "num_base_bdevs_discovered": 1, 00:38:20.091 "num_base_bdevs_operational": 1, 00:38:20.091 "base_bdevs_list": [ 00:38:20.091 { 00:38:20.091 "name": null, 00:38:20.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:20.091 "is_configured": false, 00:38:20.091 "data_offset": 0, 00:38:20.091 "data_size": 65536 00:38:20.091 }, 00:38:20.091 { 00:38:20.091 "name": "BaseBdev2", 00:38:20.091 "uuid": "a7bea67a-5e04-5cad-a609-0db2ba1500b4", 00:38:20.091 "is_configured": true, 00:38:20.091 "data_offset": 0, 00:38:20.091 "data_size": 65536 00:38:20.091 } 00:38:20.091 ] 00:38:20.091 }' 00:38:20.091 17:35:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:20.091 17:35:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:38:20.091 17:35:20 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:20.091 17:35:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:38:20.091 17:35:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:38:20.091 17:35:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:20.091 17:35:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:20.091 [2024-11-26 17:35:20.754022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:20.350 17:35:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:20.350 17:35:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:38:20.350 [2024-11-26 17:35:20.850619] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:38:20.350 [2024-11-26 17:35:20.852863] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:20.350 [2024-11-26 17:35:20.969996] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:38:20.350 [2024-11-26 17:35:20.970614] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:38:20.609 [2024-11-26 17:35:21.187189] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:38:20.609 [2024-11-26 17:35:21.187572] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:38:20.869 147.67 IOPS, 443.00 MiB/s [2024-11-26T17:35:21.564Z] [2024-11-26 17:35:21.534098] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:38:20.869 [2024-11-26 17:35:21.534748] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:38:21.129 [2024-11-26 17:35:21.671056] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:38:21.129 17:35:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:21.129 17:35:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:21.129 17:35:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:21.129 17:35:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:21.129 17:35:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:21.129 17:35:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:21.129 17:35:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:21.129 17:35:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:21.389 17:35:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:21.389 17:35:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:21.389 17:35:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:21.389 "name": "raid_bdev1", 00:38:21.389 "uuid": "cf5c9cea-fef3-428d-8ac1-2b39e69bcbc9", 00:38:21.389 "strip_size_kb": 0, 00:38:21.389 "state": "online", 00:38:21.389 "raid_level": "raid1", 00:38:21.389 "superblock": false, 00:38:21.389 "num_base_bdevs": 2, 00:38:21.389 "num_base_bdevs_discovered": 2, 00:38:21.389 "num_base_bdevs_operational": 2, 00:38:21.389 "process": { 00:38:21.389 "type": "rebuild", 00:38:21.389 "target": "spare", 00:38:21.389 "progress": { 00:38:21.389 "blocks": 10240, 00:38:21.389 
"percent": 15 00:38:21.389 } 00:38:21.389 }, 00:38:21.389 "base_bdevs_list": [ 00:38:21.389 { 00:38:21.389 "name": "spare", 00:38:21.389 "uuid": "7f16a935-58bc-5e21-a6b3-a49c51169078", 00:38:21.389 "is_configured": true, 00:38:21.389 "data_offset": 0, 00:38:21.389 "data_size": 65536 00:38:21.389 }, 00:38:21.389 { 00:38:21.389 "name": "BaseBdev2", 00:38:21.389 "uuid": "a7bea67a-5e04-5cad-a609-0db2ba1500b4", 00:38:21.389 "is_configured": true, 00:38:21.389 "data_offset": 0, 00:38:21.389 "data_size": 65536 00:38:21.389 } 00:38:21.389 ] 00:38:21.389 }' 00:38:21.389 17:35:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:21.389 17:35:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:21.389 17:35:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:21.389 17:35:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:21.389 17:35:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:38:21.389 17:35:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:38:21.389 17:35:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:38:21.389 17:35:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:38:21.389 17:35:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=416 00:38:21.389 17:35:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:38:21.389 17:35:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:21.389 17:35:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:21.389 17:35:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:38:21.389 17:35:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:21.389 17:35:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:21.389 17:35:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:21.389 17:35:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:21.389 17:35:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:21.389 17:35:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:21.389 17:35:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:21.389 [2024-11-26 17:35:22.002469] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:38:21.389 17:35:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:21.389 "name": "raid_bdev1", 00:38:21.389 "uuid": "cf5c9cea-fef3-428d-8ac1-2b39e69bcbc9", 00:38:21.389 "strip_size_kb": 0, 00:38:21.389 "state": "online", 00:38:21.389 "raid_level": "raid1", 00:38:21.389 "superblock": false, 00:38:21.389 "num_base_bdevs": 2, 00:38:21.389 "num_base_bdevs_discovered": 2, 00:38:21.389 "num_base_bdevs_operational": 2, 00:38:21.389 "process": { 00:38:21.389 "type": "rebuild", 00:38:21.389 "target": "spare", 00:38:21.389 "progress": { 00:38:21.389 "blocks": 12288, 00:38:21.389 "percent": 18 00:38:21.389 } 00:38:21.389 }, 00:38:21.389 "base_bdevs_list": [ 00:38:21.389 { 00:38:21.389 "name": "spare", 00:38:21.389 "uuid": "7f16a935-58bc-5e21-a6b3-a49c51169078", 00:38:21.389 "is_configured": true, 00:38:21.389 "data_offset": 0, 00:38:21.389 "data_size": 65536 00:38:21.389 }, 00:38:21.389 { 00:38:21.389 "name": "BaseBdev2", 00:38:21.389 "uuid": "a7bea67a-5e04-5cad-a609-0db2ba1500b4", 00:38:21.389 "is_configured": 
true, 00:38:21.389 "data_offset": 0, 00:38:21.389 "data_size": 65536 00:38:21.389 } 00:38:21.389 ] 00:38:21.389 }' 00:38:21.389 17:35:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:21.389 17:35:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:21.389 17:35:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:21.648 17:35:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:21.648 17:35:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:38:21.648 [2024-11-26 17:35:22.140183] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:38:21.907 132.25 IOPS, 396.75 MiB/s [2024-11-26T17:35:22.602Z] [2024-11-26 17:35:22.498330] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:38:22.165 [2024-11-26 17:35:22.707733] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:38:22.165 [2024-11-26 17:35:22.708104] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:38:22.423 17:35:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:38:22.423 17:35:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:22.423 17:35:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:22.423 17:35:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:22.423 17:35:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:22.423 17:35:23 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:22.423 17:35:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:22.423 17:35:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:22.423 17:35:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:22.423 17:35:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:22.423 [2024-11-26 17:35:23.100812] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:38:22.683 17:35:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:22.683 17:35:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:22.683 "name": "raid_bdev1", 00:38:22.683 "uuid": "cf5c9cea-fef3-428d-8ac1-2b39e69bcbc9", 00:38:22.683 "strip_size_kb": 0, 00:38:22.683 "state": "online", 00:38:22.683 "raid_level": "raid1", 00:38:22.683 "superblock": false, 00:38:22.683 "num_base_bdevs": 2, 00:38:22.683 "num_base_bdevs_discovered": 2, 00:38:22.683 "num_base_bdevs_operational": 2, 00:38:22.683 "process": { 00:38:22.683 "type": "rebuild", 00:38:22.683 "target": "spare", 00:38:22.683 "progress": { 00:38:22.683 "blocks": 28672, 00:38:22.683 "percent": 43 00:38:22.683 } 00:38:22.683 }, 00:38:22.683 "base_bdevs_list": [ 00:38:22.683 { 00:38:22.683 "name": "spare", 00:38:22.683 "uuid": "7f16a935-58bc-5e21-a6b3-a49c51169078", 00:38:22.683 "is_configured": true, 00:38:22.683 "data_offset": 0, 00:38:22.683 "data_size": 65536 00:38:22.683 }, 00:38:22.683 { 00:38:22.683 "name": "BaseBdev2", 00:38:22.683 "uuid": "a7bea67a-5e04-5cad-a609-0db2ba1500b4", 00:38:22.683 "is_configured": true, 00:38:22.683 "data_offset": 0, 00:38:22.683 "data_size": 65536 00:38:22.683 } 00:38:22.683 ] 00:38:22.683 }' 00:38:22.683 17:35:23 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:22.683 17:35:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:22.683 17:35:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:22.683 17:35:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:22.683 17:35:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:38:23.879 114.40 IOPS, 343.20 MiB/s [2024-11-26T17:35:24.574Z] 17:35:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:38:23.879 17:35:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:23.879 17:35:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:23.879 17:35:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:23.879 17:35:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:23.879 17:35:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:23.879 17:35:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:23.879 17:35:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:23.879 17:35:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.879 17:35:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:23.879 17:35:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.879 17:35:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:23.879 "name": "raid_bdev1", 00:38:23.879 "uuid": "cf5c9cea-fef3-428d-8ac1-2b39e69bcbc9", 00:38:23.879 "strip_size_kb": 0, 
00:38:23.879 "state": "online", 00:38:23.879 "raid_level": "raid1", 00:38:23.879 "superblock": false, 00:38:23.879 "num_base_bdevs": 2, 00:38:23.879 "num_base_bdevs_discovered": 2, 00:38:23.879 "num_base_bdevs_operational": 2, 00:38:23.879 "process": { 00:38:23.879 "type": "rebuild", 00:38:23.879 "target": "spare", 00:38:23.879 "progress": { 00:38:23.879 "blocks": 49152, 00:38:23.879 "percent": 75 00:38:23.879 } 00:38:23.879 }, 00:38:23.879 "base_bdevs_list": [ 00:38:23.879 { 00:38:23.879 "name": "spare", 00:38:23.879 "uuid": "7f16a935-58bc-5e21-a6b3-a49c51169078", 00:38:23.879 "is_configured": true, 00:38:23.879 "data_offset": 0, 00:38:23.879 "data_size": 65536 00:38:23.879 }, 00:38:23.879 { 00:38:23.879 "name": "BaseBdev2", 00:38:23.879 "uuid": "a7bea67a-5e04-5cad-a609-0db2ba1500b4", 00:38:23.879 "is_configured": true, 00:38:23.879 "data_offset": 0, 00:38:23.879 "data_size": 65536 00:38:23.879 } 00:38:23.879 ] 00:38:23.879 }' 00:38:23.879 17:35:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:23.879 17:35:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:23.879 17:35:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:23.879 17:35:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:23.879 17:35:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:38:24.138 101.17 IOPS, 303.50 MiB/s [2024-11-26T17:35:24.833Z] [2024-11-26 17:35:24.729070] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:38:24.705 [2024-11-26 17:35:25.165367] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:38:24.705 [2024-11-26 17:35:25.265304] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:38:24.705 
[2024-11-26 17:35:25.267935] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:24.705 17:35:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:38:24.705 17:35:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:24.705 17:35:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:24.705 17:35:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:24.705 17:35:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:24.705 17:35:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:24.705 17:35:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:24.705 17:35:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:24.705 91.00 IOPS, 273.00 MiB/s [2024-11-26T17:35:25.400Z] 17:35:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:24.705 17:35:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:24.964 17:35:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:24.964 17:35:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:24.964 "name": "raid_bdev1", 00:38:24.964 "uuid": "cf5c9cea-fef3-428d-8ac1-2b39e69bcbc9", 00:38:24.964 "strip_size_kb": 0, 00:38:24.964 "state": "online", 00:38:24.964 "raid_level": "raid1", 00:38:24.964 "superblock": false, 00:38:24.964 "num_base_bdevs": 2, 00:38:24.964 "num_base_bdevs_discovered": 2, 00:38:24.964 "num_base_bdevs_operational": 2, 00:38:24.964 "base_bdevs_list": [ 00:38:24.964 { 00:38:24.964 "name": "spare", 00:38:24.964 "uuid": "7f16a935-58bc-5e21-a6b3-a49c51169078", 00:38:24.964 "is_configured": 
true, 00:38:24.964 "data_offset": 0, 00:38:24.964 "data_size": 65536 00:38:24.964 }, 00:38:24.964 { 00:38:24.964 "name": "BaseBdev2", 00:38:24.964 "uuid": "a7bea67a-5e04-5cad-a609-0db2ba1500b4", 00:38:24.964 "is_configured": true, 00:38:24.964 "data_offset": 0, 00:38:24.964 "data_size": 65536 00:38:24.964 } 00:38:24.964 ] 00:38:24.964 }' 00:38:24.964 17:35:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:24.964 17:35:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:38:24.964 17:35:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:24.964 17:35:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:38:24.964 17:35:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:38:24.964 17:35:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:24.964 17:35:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:24.964 17:35:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:38:24.964 17:35:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:38:24.964 17:35:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:24.964 17:35:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:24.964 17:35:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:24.964 17:35:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:24.964 17:35:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:24.964 17:35:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:24.964 
17:35:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:24.964 "name": "raid_bdev1", 00:38:24.964 "uuid": "cf5c9cea-fef3-428d-8ac1-2b39e69bcbc9", 00:38:24.964 "strip_size_kb": 0, 00:38:24.964 "state": "online", 00:38:24.964 "raid_level": "raid1", 00:38:24.964 "superblock": false, 00:38:24.964 "num_base_bdevs": 2, 00:38:24.964 "num_base_bdevs_discovered": 2, 00:38:24.964 "num_base_bdevs_operational": 2, 00:38:24.964 "base_bdevs_list": [ 00:38:24.964 { 00:38:24.964 "name": "spare", 00:38:24.964 "uuid": "7f16a935-58bc-5e21-a6b3-a49c51169078", 00:38:24.964 "is_configured": true, 00:38:24.964 "data_offset": 0, 00:38:24.964 "data_size": 65536 00:38:24.964 }, 00:38:24.964 { 00:38:24.964 "name": "BaseBdev2", 00:38:24.964 "uuid": "a7bea67a-5e04-5cad-a609-0db2ba1500b4", 00:38:24.964 "is_configured": true, 00:38:24.964 "data_offset": 0, 00:38:24.964 "data_size": 65536 00:38:24.964 } 00:38:24.964 ] 00:38:24.964 }' 00:38:24.964 17:35:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:24.964 17:35:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:38:24.964 17:35:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:24.964 17:35:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:38:24.964 17:35:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:38:24.964 17:35:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:24.964 17:35:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:24.964 17:35:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:24.965 17:35:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:24.965 17:35:25 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:38:24.965 17:35:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:24.965 17:35:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:24.965 17:35:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:24.965 17:35:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:24.965 17:35:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:24.965 17:35:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:24.965 17:35:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:24.965 17:35:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:24.965 17:35:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:25.223 17:35:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:25.223 "name": "raid_bdev1", 00:38:25.223 "uuid": "cf5c9cea-fef3-428d-8ac1-2b39e69bcbc9", 00:38:25.223 "strip_size_kb": 0, 00:38:25.223 "state": "online", 00:38:25.223 "raid_level": "raid1", 00:38:25.223 "superblock": false, 00:38:25.223 "num_base_bdevs": 2, 00:38:25.223 "num_base_bdevs_discovered": 2, 00:38:25.223 "num_base_bdevs_operational": 2, 00:38:25.223 "base_bdevs_list": [ 00:38:25.223 { 00:38:25.223 "name": "spare", 00:38:25.223 "uuid": "7f16a935-58bc-5e21-a6b3-a49c51169078", 00:38:25.223 "is_configured": true, 00:38:25.223 "data_offset": 0, 00:38:25.223 "data_size": 65536 00:38:25.223 }, 00:38:25.223 { 00:38:25.223 "name": "BaseBdev2", 00:38:25.223 "uuid": "a7bea67a-5e04-5cad-a609-0db2ba1500b4", 00:38:25.223 "is_configured": true, 00:38:25.223 "data_offset": 0, 00:38:25.223 "data_size": 65536 00:38:25.223 } 
00:38:25.223 ] 00:38:25.223 }' 00:38:25.223 17:35:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:25.223 17:35:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:25.482 17:35:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:38:25.482 17:35:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:25.482 17:35:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:25.482 [2024-11-26 17:35:26.024499] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:25.482 [2024-11-26 17:35:26.024548] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:25.482 00:38:25.482 Latency(us) 00:38:25.482 [2024-11-26T17:35:26.177Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:25.482 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:38:25.482 raid_bdev1 : 7.74 85.40 256.19 0.00 0.00 15347.85 347.00 116762.83 00:38:25.482 [2024-11-26T17:35:26.177Z] =================================================================================================================== 00:38:25.482 [2024-11-26T17:35:26.177Z] Total : 85.40 256.19 0.00 0.00 15347.85 347.00 116762.83 00:38:25.482 [2024-11-26 17:35:26.147777] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:25.482 [2024-11-26 17:35:26.147854] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:25.482 [2024-11-26 17:35:26.147928] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:25.482 [2024-11-26 17:35:26.147957] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:38:25.482 { 00:38:25.482 "results": [ 00:38:25.482 { 00:38:25.482 "job": 
"raid_bdev1", 00:38:25.482 "core_mask": "0x1", 00:38:25.482 "workload": "randrw", 00:38:25.482 "percentage": 50, 00:38:25.482 "status": "finished", 00:38:25.482 "queue_depth": 2, 00:38:25.482 "io_size": 3145728, 00:38:25.482 "runtime": 7.740206, 00:38:25.482 "iops": 85.39824392270697, 00:38:25.482 "mibps": 256.1947317681209, 00:38:25.482 "io_failed": 0, 00:38:25.482 "io_timeout": 0, 00:38:25.482 "avg_latency_us": 15347.852899867212, 00:38:25.482 "min_latency_us": 346.99737991266375, 00:38:25.482 "max_latency_us": 116762.82969432314 00:38:25.482 } 00:38:25.482 ], 00:38:25.482 "core_count": 1 00:38:25.482 } 00:38:25.482 17:35:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:25.482 17:35:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:25.482 17:35:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:25.482 17:35:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:38:25.482 17:35:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:25.482 17:35:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:25.741 17:35:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:38:25.741 17:35:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:38:25.741 17:35:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:38:25.741 17:35:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:38:25.741 17:35:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:38:25.741 17:35:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:38:25.741 17:35:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 
00:38:25.741 17:35:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:38:25.741 17:35:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:38:25.741 17:35:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:38:25.741 17:35:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:38:25.741 17:35:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:38:25.741 17:35:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:38:25.741 /dev/nbd0 00:38:26.000 17:35:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:38:26.000 17:35:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:38:26.000 17:35:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:38:26.000 17:35:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:38:26.000 17:35:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:38:26.000 17:35:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:38:26.000 17:35:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:38:26.000 17:35:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:38:26.000 17:35:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:38:26.000 17:35:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:38:26.000 17:35:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:26.000 1+0 records in 00:38:26.000 1+0 records out 00:38:26.000 4096 
bytes (4.1 kB, 4.0 KiB) copied, 0.000440529 s, 9.3 MB/s 00:38:26.000 17:35:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:26.000 17:35:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:38:26.000 17:35:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:26.000 17:35:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:38:26.000 17:35:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:38:26.000 17:35:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:26.000 17:35:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:38:26.000 17:35:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:38:26.000 17:35:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:38:26.000 17:35:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:38:26.000 17:35:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:38:26.000 17:35:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:38:26.000 17:35:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:38:26.000 17:35:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:38:26.000 17:35:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:38:26.000 17:35:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:38:26.000 17:35:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:38:26.000 17:35:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 
-- # (( i < 1 )) 00:38:26.000 17:35:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:38:26.258 /dev/nbd1 00:38:26.258 17:35:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:38:26.258 17:35:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:38:26.259 17:35:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:38:26.259 17:35:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:38:26.259 17:35:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:38:26.259 17:35:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:38:26.259 17:35:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:38:26.259 17:35:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:38:26.259 17:35:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:38:26.259 17:35:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:38:26.259 17:35:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:26.259 1+0 records in 00:38:26.259 1+0 records out 00:38:26.259 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000467531 s, 8.8 MB/s 00:38:26.259 17:35:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:26.259 17:35:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:38:26.259 17:35:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:26.259 
17:35:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:38:26.259 17:35:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:38:26.259 17:35:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:26.259 17:35:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:38:26.259 17:35:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:38:26.259 17:35:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:38:26.259 17:35:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:38:26.259 17:35:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:38:26.259 17:35:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:38:26.259 17:35:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:38:26.259 17:35:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:26.259 17:35:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:38:26.517 17:35:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:38:26.517 17:35:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:38:26.517 17:35:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:38:26.517 17:35:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:26.517 17:35:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:26.517 17:35:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:38:26.517 17:35:27 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@41 -- # break 00:38:26.517 17:35:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:38:26.517 17:35:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:38:26.517 17:35:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:38:26.517 17:35:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:38:26.517 17:35:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:38:26.517 17:35:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:38:26.517 17:35:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:26.517 17:35:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:38:26.777 17:35:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:38:26.777 17:35:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:38:26.777 17:35:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:38:26.777 17:35:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:26.777 17:35:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:26.777 17:35:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:38:26.777 17:35:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:38:26.777 17:35:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:38:26.777 17:35:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:38:26.777 17:35:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76763 00:38:26.777 17:35:27 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76763 ']' 00:38:26.777 17:35:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76763 00:38:26.777 17:35:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:38:26.777 17:35:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:26.777 17:35:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76763 00:38:26.777 17:35:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:26.777 17:35:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:26.777 killing process with pid 76763 00:38:26.777 17:35:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76763' 00:38:26.778 17:35:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76763 00:38:26.778 Received shutdown signal, test time was about 9.050309 seconds 00:38:26.778 00:38:26.778 Latency(us) 00:38:26.778 [2024-11-26T17:35:27.473Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:26.778 [2024-11-26T17:35:27.473Z] =================================================================================================================== 00:38:26.778 [2024-11-26T17:35:27.473Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:26.778 [2024-11-26 17:35:27.431852] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:38:26.778 17:35:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76763 00:38:27.038 [2024-11-26 17:35:27.669436] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:38:28.525 17:35:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:38:28.525 00:38:28.525 real 0m12.387s 00:38:28.525 user 0m15.599s 00:38:28.525 sys 0m1.489s 
00:38:28.525 17:35:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:28.525 17:35:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:38:28.525 ************************************ 00:38:28.525 END TEST raid_rebuild_test_io 00:38:28.525 ************************************ 00:38:28.525 17:35:28 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:38:28.525 17:35:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:38:28.525 17:35:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:28.525 17:35:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:38:28.525 ************************************ 00:38:28.525 START TEST raid_rebuild_test_sb_io 00:38:28.525 ************************************ 00:38:28.525 17:35:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:38:28.525 17:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:38:28.525 17:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:38:28.525 17:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:38:28.525 17:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:38:28.525 17:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:38:28.525 17:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:38:28.525 17:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:38:28.525 17:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:38:28.525 17:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:38:28.525 17:35:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:38:28.525 17:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:38:28.525 17:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:38:28.525 17:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:38:28.525 17:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:38:28.525 17:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:38:28.525 17:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:38:28.525 17:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:38:28.525 17:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:38:28.525 17:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:38:28.525 17:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:38:28.525 17:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:38:28.525 17:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:38:28.525 17:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:38:28.525 17:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:38:28.525 17:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77139 00:38:28.525 17:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:38:28.525 17:35:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77139 
00:38:28.525 17:35:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 77139 ']' 00:38:28.525 17:35:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:28.525 17:35:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:28.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:28.525 17:35:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:28.525 17:35:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:28.525 17:35:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:28.525 I/O size of 3145728 is greater than zero copy threshold (65536). 00:38:28.525 Zero copy mechanism will not be used. 00:38:28.525 [2024-11-26 17:35:29.030554] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:38:28.526 [2024-11-26 17:35:29.030669] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77139 ] 00:38:28.783 [2024-11-26 17:35:29.203357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:28.783 [2024-11-26 17:35:29.316261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:29.042 [2024-11-26 17:35:29.521681] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:29.042 [2024-11-26 17:35:29.521772] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:29.325 17:35:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:29.325 17:35:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:38:29.325 17:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:38:29.325 17:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:38:29.325 17:35:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:29.325 17:35:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:29.325 BaseBdev1_malloc 00:38:29.325 17:35:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:29.325 17:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:38:29.325 17:35:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:29.325 17:35:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:29.325 [2024-11-26 17:35:29.924218] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:38:29.325 [2024-11-26 17:35:29.924286] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:29.325 [2024-11-26 17:35:29.924327] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:38:29.325 [2024-11-26 17:35:29.924349] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:29.325 [2024-11-26 17:35:29.926557] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:29.325 [2024-11-26 17:35:29.926597] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:38:29.325 BaseBdev1 00:38:29.325 17:35:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:29.325 17:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:38:29.325 17:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:38:29.325 17:35:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:29.325 17:35:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:29.325 BaseBdev2_malloc 00:38:29.325 17:35:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:29.325 17:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:38:29.325 17:35:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:29.325 17:35:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:29.325 [2024-11-26 17:35:29.978195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:38:29.325 [2024-11-26 17:35:29.978270] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:38:29.325 [2024-11-26 17:35:29.978295] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:38:29.325 [2024-11-26 17:35:29.978308] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:29.325 [2024-11-26 17:35:29.980788] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:29.325 [2024-11-26 17:35:29.980834] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:38:29.325 BaseBdev2 00:38:29.325 17:35:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:29.325 17:35:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:38:29.325 17:35:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:29.325 17:35:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:29.584 spare_malloc 00:38:29.584 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:29.584 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:38:29.584 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:29.584 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:29.584 spare_delay 00:38:29.584 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:29.584 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:38:29.584 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:29.584 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:29.585 
[2024-11-26 17:35:30.058798] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:38:29.585 [2024-11-26 17:35:30.058862] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:29.585 [2024-11-26 17:35:30.058883] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:38:29.585 [2024-11-26 17:35:30.058894] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:29.585 [2024-11-26 17:35:30.060997] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:29.585 [2024-11-26 17:35:30.061038] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:38:29.585 spare 00:38:29.585 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:29.585 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:38:29.585 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:29.585 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:29.585 [2024-11-26 17:35:30.070830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:29.585 [2024-11-26 17:35:30.072619] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:38:29.585 [2024-11-26 17:35:30.072813] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:38:29.585 [2024-11-26 17:35:30.072831] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:38:29.585 [2024-11-26 17:35:30.073110] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:38:29.585 [2024-11-26 17:35:30.073295] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:38:29.585 [2024-11-26 
17:35:30.073304] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:38:29.585 [2024-11-26 17:35:30.073473] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:29.585 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:29.585 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:38:29.585 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:29.585 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:29.585 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:29.585 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:29.585 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:38:29.585 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:29.585 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:29.585 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:29.585 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:29.585 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:29.585 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:29.585 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:29.585 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:29.585 17:35:30 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:29.585 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:29.585 "name": "raid_bdev1", 00:38:29.585 "uuid": "d2f0b20f-8e8d-439a-9167-877c920c6de5", 00:38:29.585 "strip_size_kb": 0, 00:38:29.585 "state": "online", 00:38:29.585 "raid_level": "raid1", 00:38:29.585 "superblock": true, 00:38:29.585 "num_base_bdevs": 2, 00:38:29.585 "num_base_bdevs_discovered": 2, 00:38:29.585 "num_base_bdevs_operational": 2, 00:38:29.585 "base_bdevs_list": [ 00:38:29.585 { 00:38:29.585 "name": "BaseBdev1", 00:38:29.585 "uuid": "fde85181-3729-51ff-a647-bf34fa43df04", 00:38:29.585 "is_configured": true, 00:38:29.585 "data_offset": 2048, 00:38:29.585 "data_size": 63488 00:38:29.585 }, 00:38:29.585 { 00:38:29.585 "name": "BaseBdev2", 00:38:29.585 "uuid": "f7ad4084-b379-51f6-944b-72f33e85aac2", 00:38:29.585 "is_configured": true, 00:38:29.585 "data_offset": 2048, 00:38:29.585 "data_size": 63488 00:38:29.585 } 00:38:29.585 ] 00:38:29.585 }' 00:38:29.585 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:29.585 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:29.845 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:38:29.845 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:29.845 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:29.845 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:38:29.845 [2024-11-26 17:35:30.506548] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:29.845 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:30.105 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=63488 00:38:30.105 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:30.105 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:30.105 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:30.105 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:38:30.105 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:30.105 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:38:30.105 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:38:30.105 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:38:30.105 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:38:30.105 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:30.105 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:30.105 [2024-11-26 17:35:30.606011] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:38:30.105 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:30.105 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:30.105 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:30.105 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:30.105 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:38:30.105 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:30.105 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:38:30.105 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:30.105 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:30.105 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:30.105 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:30.105 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:30.105 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:30.105 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:30.105 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:30.105 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:30.105 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:30.105 "name": "raid_bdev1", 00:38:30.105 "uuid": "d2f0b20f-8e8d-439a-9167-877c920c6de5", 00:38:30.105 "strip_size_kb": 0, 00:38:30.105 "state": "online", 00:38:30.105 "raid_level": "raid1", 00:38:30.105 "superblock": true, 00:38:30.105 "num_base_bdevs": 2, 00:38:30.105 "num_base_bdevs_discovered": 1, 00:38:30.105 "num_base_bdevs_operational": 1, 00:38:30.105 "base_bdevs_list": [ 00:38:30.105 { 00:38:30.105 "name": null, 00:38:30.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:30.105 "is_configured": false, 00:38:30.105 "data_offset": 0, 00:38:30.105 "data_size": 63488 00:38:30.105 }, 00:38:30.105 { 00:38:30.105 "name": "BaseBdev2", 00:38:30.105 "uuid": 
"f7ad4084-b379-51f6-944b-72f33e85aac2", 00:38:30.105 "is_configured": true, 00:38:30.105 "data_offset": 2048, 00:38:30.105 "data_size": 63488 00:38:30.105 } 00:38:30.105 ] 00:38:30.105 }' 00:38:30.105 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:30.105 17:35:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:30.106 [2024-11-26 17:35:30.720638] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:38:30.106 I/O size of 3145728 is greater than zero copy threshold (65536). 00:38:30.106 Zero copy mechanism will not be used. 00:38:30.106 Running I/O for 60 seconds... 00:38:30.365 17:35:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:38:30.365 17:35:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:30.365 17:35:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:30.624 [2024-11-26 17:35:31.066031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:30.624 17:35:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:30.624 17:35:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:38:30.624 [2024-11-26 17:35:31.173245] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:38:30.624 [2024-11-26 17:35:31.175896] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:30.624 [2024-11-26 17:35:31.285567] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:38:30.624 [2024-11-26 17:35:31.286746] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:38:30.883 [2024-11-26 17:35:31.513986] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:38:30.883 [2024-11-26 17:35:31.514696] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:38:31.402 188.00 IOPS, 564.00 MiB/s [2024-11-26T17:35:32.097Z] [2024-11-26 17:35:31.858205] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:38:31.402 [2024-11-26 17:35:31.859226] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:38:31.402 [2024-11-26 17:35:32.092982] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:38:31.663 17:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:31.663 17:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:31.663 17:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:31.663 17:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:31.663 17:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:31.663 17:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:31.663 17:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:31.663 17:35:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:31.663 17:35:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:31.663 17:35:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:31.663 17:35:32 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:31.663 "name": "raid_bdev1", 00:38:31.663 "uuid": "d2f0b20f-8e8d-439a-9167-877c920c6de5", 00:38:31.663 "strip_size_kb": 0, 00:38:31.663 "state": "online", 00:38:31.663 "raid_level": "raid1", 00:38:31.663 "superblock": true, 00:38:31.663 "num_base_bdevs": 2, 00:38:31.663 "num_base_bdevs_discovered": 2, 00:38:31.663 "num_base_bdevs_operational": 2, 00:38:31.663 "process": { 00:38:31.663 "type": "rebuild", 00:38:31.663 "target": "spare", 00:38:31.663 "progress": { 00:38:31.663 "blocks": 10240, 00:38:31.663 "percent": 16 00:38:31.663 } 00:38:31.663 }, 00:38:31.663 "base_bdevs_list": [ 00:38:31.663 { 00:38:31.663 "name": "spare", 00:38:31.663 "uuid": "c753cf2b-d84c-58b4-bd98-5ca8c771cfef", 00:38:31.663 "is_configured": true, 00:38:31.663 "data_offset": 2048, 00:38:31.663 "data_size": 63488 00:38:31.663 }, 00:38:31.663 { 00:38:31.663 "name": "BaseBdev2", 00:38:31.663 "uuid": "f7ad4084-b379-51f6-944b-72f33e85aac2", 00:38:31.663 "is_configured": true, 00:38:31.663 "data_offset": 2048, 00:38:31.663 "data_size": 63488 00:38:31.663 } 00:38:31.663 ] 00:38:31.663 }' 00:38:31.663 17:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:31.663 17:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:31.663 17:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:31.663 17:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:31.663 17:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:38:31.663 17:35:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:31.663 17:35:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:31.663 [2024-11-26 17:35:32.295510] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:31.663 [2024-11-26 17:35:32.317536] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:38:31.922 [2024-11-26 17:35:32.426153] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:38:31.922 [2024-11-26 17:35:32.430488] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:31.922 [2024-11-26 17:35:32.430645] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:31.922 [2024-11-26 17:35:32.430688] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:38:31.922 [2024-11-26 17:35:32.471877] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:38:31.922 17:35:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:31.922 17:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:31.922 17:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:31.922 17:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:31.922 17:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:31.922 17:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:31.922 17:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:38:31.922 17:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:31.922 17:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:31.922 17:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:38:31.922 17:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:31.922 17:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:31.922 17:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:31.922 17:35:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:31.922 17:35:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:31.922 17:35:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:31.922 17:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:31.922 "name": "raid_bdev1", 00:38:31.922 "uuid": "d2f0b20f-8e8d-439a-9167-877c920c6de5", 00:38:31.922 "strip_size_kb": 0, 00:38:31.922 "state": "online", 00:38:31.922 "raid_level": "raid1", 00:38:31.922 "superblock": true, 00:38:31.922 "num_base_bdevs": 2, 00:38:31.922 "num_base_bdevs_discovered": 1, 00:38:31.922 "num_base_bdevs_operational": 1, 00:38:31.922 "base_bdevs_list": [ 00:38:31.922 { 00:38:31.922 "name": null, 00:38:31.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:31.922 "is_configured": false, 00:38:31.922 "data_offset": 0, 00:38:31.922 "data_size": 63488 00:38:31.922 }, 00:38:31.922 { 00:38:31.922 "name": "BaseBdev2", 00:38:31.922 "uuid": "f7ad4084-b379-51f6-944b-72f33e85aac2", 00:38:31.922 "is_configured": true, 00:38:31.922 "data_offset": 2048, 00:38:31.922 "data_size": 63488 00:38:31.922 } 00:38:31.922 ] 00:38:31.922 }' 00:38:31.922 17:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:31.922 17:35:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:32.442 151.00 IOPS, 453.00 MiB/s [2024-11-26T17:35:33.137Z] 17:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- 
# verify_raid_bdev_process raid_bdev1 none none 00:38:32.442 17:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:32.442 17:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:38:32.442 17:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:38:32.442 17:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:32.442 17:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:32.442 17:35:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:32.442 17:35:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:32.442 17:35:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:32.442 17:35:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:32.442 17:35:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:32.442 "name": "raid_bdev1", 00:38:32.442 "uuid": "d2f0b20f-8e8d-439a-9167-877c920c6de5", 00:38:32.442 "strip_size_kb": 0, 00:38:32.442 "state": "online", 00:38:32.442 "raid_level": "raid1", 00:38:32.442 "superblock": true, 00:38:32.442 "num_base_bdevs": 2, 00:38:32.442 "num_base_bdevs_discovered": 1, 00:38:32.442 "num_base_bdevs_operational": 1, 00:38:32.442 "base_bdevs_list": [ 00:38:32.442 { 00:38:32.442 "name": null, 00:38:32.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:32.442 "is_configured": false, 00:38:32.442 "data_offset": 0, 00:38:32.442 "data_size": 63488 00:38:32.442 }, 00:38:32.442 { 00:38:32.442 "name": "BaseBdev2", 00:38:32.442 "uuid": "f7ad4084-b379-51f6-944b-72f33e85aac2", 00:38:32.442 "is_configured": true, 00:38:32.442 "data_offset": 2048, 00:38:32.442 "data_size": 63488 00:38:32.442 } 
00:38:32.442 ] 00:38:32.442 }' 00:38:32.442 17:35:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:32.442 17:35:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:38:32.442 17:35:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:32.442 17:35:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:38:32.442 17:35:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:38:32.442 17:35:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:32.442 17:35:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:32.442 [2024-11-26 17:35:33.116929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:32.702 17:35:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:32.702 17:35:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:38:32.702 [2024-11-26 17:35:33.190701] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:38:32.702 [2024-11-26 17:35:33.193443] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:32.702 [2024-11-26 17:35:33.338034] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:38:32.962 [2024-11-26 17:35:33.574629] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:38:32.962 [2024-11-26 17:35:33.575326] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:38:33.245 160.67 IOPS, 482.00 MiB/s [2024-11-26T17:35:33.940Z] [2024-11-26 17:35:33.917269] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:38:33.521 [2024-11-26 17:35:34.038565] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:38:33.521 17:35:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:33.521 17:35:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:33.521 17:35:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:33.521 17:35:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:33.521 17:35:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:33.521 17:35:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:33.521 17:35:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:33.521 17:35:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:33.521 17:35:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:33.521 17:35:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:33.780 17:35:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:33.780 "name": "raid_bdev1", 00:38:33.780 "uuid": "d2f0b20f-8e8d-439a-9167-877c920c6de5", 00:38:33.780 "strip_size_kb": 0, 00:38:33.780 "state": "online", 00:38:33.780 "raid_level": "raid1", 00:38:33.780 "superblock": true, 00:38:33.780 "num_base_bdevs": 2, 00:38:33.780 "num_base_bdevs_discovered": 2, 00:38:33.780 "num_base_bdevs_operational": 2, 00:38:33.780 "process": { 00:38:33.780 "type": "rebuild", 00:38:33.780 "target": "spare", 00:38:33.780 "progress": { 
00:38:33.780 "blocks": 10240, 00:38:33.780 "percent": 16 00:38:33.780 } 00:38:33.780 }, 00:38:33.780 "base_bdevs_list": [ 00:38:33.780 { 00:38:33.780 "name": "spare", 00:38:33.780 "uuid": "c753cf2b-d84c-58b4-bd98-5ca8c771cfef", 00:38:33.780 "is_configured": true, 00:38:33.780 "data_offset": 2048, 00:38:33.780 "data_size": 63488 00:38:33.780 }, 00:38:33.780 { 00:38:33.780 "name": "BaseBdev2", 00:38:33.780 "uuid": "f7ad4084-b379-51f6-944b-72f33e85aac2", 00:38:33.780 "is_configured": true, 00:38:33.780 "data_offset": 2048, 00:38:33.780 "data_size": 63488 00:38:33.780 } 00:38:33.780 ] 00:38:33.780 }' 00:38:33.780 17:35:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:33.780 17:35:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:33.780 17:35:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:33.780 17:35:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:33.780 17:35:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:38:33.780 17:35:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:38:33.780 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:38:33.780 17:35:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:38:33.780 17:35:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:38:33.780 17:35:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:38:33.780 17:35:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=429 00:38:33.780 17:35:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:38:33.780 17:35:34 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:33.780 17:35:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:33.780 17:35:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:33.780 17:35:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:33.780 17:35:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:33.780 17:35:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:33.780 17:35:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:33.780 17:35:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:33.780 17:35:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:33.780 17:35:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:33.780 17:35:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:33.780 "name": "raid_bdev1", 00:38:33.780 "uuid": "d2f0b20f-8e8d-439a-9167-877c920c6de5", 00:38:33.780 "strip_size_kb": 0, 00:38:33.780 "state": "online", 00:38:33.780 "raid_level": "raid1", 00:38:33.780 "superblock": true, 00:38:33.780 "num_base_bdevs": 2, 00:38:33.780 "num_base_bdevs_discovered": 2, 00:38:33.780 "num_base_bdevs_operational": 2, 00:38:33.780 "process": { 00:38:33.780 "type": "rebuild", 00:38:33.780 "target": "spare", 00:38:33.780 "progress": { 00:38:33.780 "blocks": 12288, 00:38:33.780 "percent": 19 00:38:33.780 } 00:38:33.780 }, 00:38:33.780 "base_bdevs_list": [ 00:38:33.780 { 00:38:33.780 "name": "spare", 00:38:33.780 "uuid": "c753cf2b-d84c-58b4-bd98-5ca8c771cfef", 00:38:33.780 "is_configured": true, 00:38:33.780 "data_offset": 2048, 00:38:33.780 "data_size": 63488 
00:38:33.780 }, 00:38:33.780 { 00:38:33.780 "name": "BaseBdev2", 00:38:33.780 "uuid": "f7ad4084-b379-51f6-944b-72f33e85aac2", 00:38:33.780 "is_configured": true, 00:38:33.780 "data_offset": 2048, 00:38:33.780 "data_size": 63488 00:38:33.780 } 00:38:33.780 ] 00:38:33.780 }' 00:38:33.780 17:35:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:33.780 [2024-11-26 17:35:34.411978] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:38:33.780 17:35:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:33.780 17:35:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:33.780 17:35:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:33.780 17:35:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:38:34.038 [2024-11-26 17:35:34.543771] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:38:34.606 143.75 IOPS, 431.25 MiB/s [2024-11-26T17:35:35.301Z] [2024-11-26 17:35:35.192932] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:38:34.865 [2024-11-26 17:35:35.402918] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:38:34.865 [2024-11-26 17:35:35.403292] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:38:34.865 17:35:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:38:34.865 17:35:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:34.865 17:35:35 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:34.865 17:35:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:34.865 17:35:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:34.865 17:35:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:34.865 17:35:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:34.865 17:35:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:34.865 17:35:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:34.865 17:35:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:34.865 17:35:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:34.865 17:35:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:34.865 "name": "raid_bdev1", 00:38:34.865 "uuid": "d2f0b20f-8e8d-439a-9167-877c920c6de5", 00:38:34.865 "strip_size_kb": 0, 00:38:34.865 "state": "online", 00:38:34.866 "raid_level": "raid1", 00:38:34.866 "superblock": true, 00:38:34.866 "num_base_bdevs": 2, 00:38:34.866 "num_base_bdevs_discovered": 2, 00:38:34.866 "num_base_bdevs_operational": 2, 00:38:34.866 "process": { 00:38:34.866 "type": "rebuild", 00:38:34.866 "target": "spare", 00:38:34.866 "progress": { 00:38:34.866 "blocks": 28672, 00:38:34.866 "percent": 45 00:38:34.866 } 00:38:34.866 }, 00:38:34.866 "base_bdevs_list": [ 00:38:34.866 { 00:38:34.866 "name": "spare", 00:38:34.866 "uuid": "c753cf2b-d84c-58b4-bd98-5ca8c771cfef", 00:38:34.866 "is_configured": true, 00:38:34.866 "data_offset": 2048, 00:38:34.866 "data_size": 63488 00:38:34.866 }, 00:38:34.866 { 00:38:34.866 "name": "BaseBdev2", 00:38:34.866 "uuid": 
"f7ad4084-b379-51f6-944b-72f33e85aac2", 00:38:34.866 "is_configured": true, 00:38:34.866 "data_offset": 2048, 00:38:34.866 "data_size": 63488 00:38:34.866 } 00:38:34.866 ] 00:38:34.866 }' 00:38:34.866 17:35:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:34.866 17:35:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:34.866 17:35:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:35.124 17:35:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:35.124 17:35:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:38:35.691 127.80 IOPS, 383.40 MiB/s [2024-11-26T17:35:36.386Z] [2024-11-26 17:35:36.180576] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:38:35.950 [2024-11-26 17:35:36.413258] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:38:35.950 17:35:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:38:35.950 17:35:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:35.950 17:35:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:35.950 17:35:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:35.950 17:35:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:35.950 17:35:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:35.950 17:35:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:35.950 17:35:36 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:35.950 17:35:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.950 17:35:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:35.950 [2024-11-26 17:35:36.630899] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:38:36.209 17:35:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:36.209 17:35:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:36.209 "name": "raid_bdev1", 00:38:36.209 "uuid": "d2f0b20f-8e8d-439a-9167-877c920c6de5", 00:38:36.209 "strip_size_kb": 0, 00:38:36.209 "state": "online", 00:38:36.209 "raid_level": "raid1", 00:38:36.209 "superblock": true, 00:38:36.209 "num_base_bdevs": 2, 00:38:36.209 "num_base_bdevs_discovered": 2, 00:38:36.209 "num_base_bdevs_operational": 2, 00:38:36.209 "process": { 00:38:36.209 "type": "rebuild", 00:38:36.209 "target": "spare", 00:38:36.209 "progress": { 00:38:36.209 "blocks": 45056, 00:38:36.209 "percent": 70 00:38:36.209 } 00:38:36.209 }, 00:38:36.209 "base_bdevs_list": [ 00:38:36.209 { 00:38:36.209 "name": "spare", 00:38:36.209 "uuid": "c753cf2b-d84c-58b4-bd98-5ca8c771cfef", 00:38:36.209 "is_configured": true, 00:38:36.209 "data_offset": 2048, 00:38:36.209 "data_size": 63488 00:38:36.209 }, 00:38:36.209 { 00:38:36.209 "name": "BaseBdev2", 00:38:36.209 "uuid": "f7ad4084-b379-51f6-944b-72f33e85aac2", 00:38:36.209 "is_configured": true, 00:38:36.209 "data_offset": 2048, 00:38:36.209 "data_size": 63488 00:38:36.209 } 00:38:36.209 ] 00:38:36.209 }' 00:38:36.209 17:35:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:36.209 17:35:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:36.209 17:35:36 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:36.209 115.83 IOPS, 347.50 MiB/s [2024-11-26T17:35:36.904Z] 17:35:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:36.209 17:35:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:38:37.144 [2024-11-26 17:35:37.503907] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:38:37.144 [2024-11-26 17:35:37.603712] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:38:37.145 [2024-11-26 17:35:37.606848] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:37.145 104.14 IOPS, 312.43 MiB/s [2024-11-26T17:35:37.840Z] 17:35:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:38:37.145 17:35:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:37.145 17:35:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:37.145 17:35:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:37.145 17:35:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:37.145 17:35:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:37.145 17:35:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:37.145 17:35:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:37.145 17:35:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:37.145 17:35:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:37.145 17:35:37 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:37.145 17:35:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:37.145 "name": "raid_bdev1", 00:38:37.145 "uuid": "d2f0b20f-8e8d-439a-9167-877c920c6de5", 00:38:37.145 "strip_size_kb": 0, 00:38:37.145 "state": "online", 00:38:37.145 "raid_level": "raid1", 00:38:37.145 "superblock": true, 00:38:37.145 "num_base_bdevs": 2, 00:38:37.145 "num_base_bdevs_discovered": 2, 00:38:37.145 "num_base_bdevs_operational": 2, 00:38:37.145 "base_bdevs_list": [ 00:38:37.145 { 00:38:37.145 "name": "spare", 00:38:37.145 "uuid": "c753cf2b-d84c-58b4-bd98-5ca8c771cfef", 00:38:37.145 "is_configured": true, 00:38:37.145 "data_offset": 2048, 00:38:37.145 "data_size": 63488 00:38:37.145 }, 00:38:37.145 { 00:38:37.145 "name": "BaseBdev2", 00:38:37.145 "uuid": "f7ad4084-b379-51f6-944b-72f33e85aac2", 00:38:37.145 "is_configured": true, 00:38:37.145 "data_offset": 2048, 00:38:37.145 "data_size": 63488 00:38:37.145 } 00:38:37.145 ] 00:38:37.145 }' 00:38:37.145 17:35:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:37.404 17:35:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:38:37.404 17:35:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:37.404 17:35:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:38:37.404 17:35:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:38:37.404 17:35:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:37.404 17:35:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:37.404 17:35:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:38:37.404 17:35:37 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@171 -- # local target=none 00:38:37.404 17:35:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:37.404 17:35:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:37.404 17:35:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:37.404 17:35:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:37.404 17:35:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:37.404 17:35:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:37.404 17:35:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:37.404 "name": "raid_bdev1", 00:38:37.404 "uuid": "d2f0b20f-8e8d-439a-9167-877c920c6de5", 00:38:37.404 "strip_size_kb": 0, 00:38:37.404 "state": "online", 00:38:37.404 "raid_level": "raid1", 00:38:37.404 "superblock": true, 00:38:37.404 "num_base_bdevs": 2, 00:38:37.404 "num_base_bdevs_discovered": 2, 00:38:37.404 "num_base_bdevs_operational": 2, 00:38:37.404 "base_bdevs_list": [ 00:38:37.404 { 00:38:37.404 "name": "spare", 00:38:37.404 "uuid": "c753cf2b-d84c-58b4-bd98-5ca8c771cfef", 00:38:37.404 "is_configured": true, 00:38:37.404 "data_offset": 2048, 00:38:37.404 "data_size": 63488 00:38:37.404 }, 00:38:37.404 { 00:38:37.404 "name": "BaseBdev2", 00:38:37.404 "uuid": "f7ad4084-b379-51f6-944b-72f33e85aac2", 00:38:37.404 "is_configured": true, 00:38:37.404 "data_offset": 2048, 00:38:37.404 "data_size": 63488 00:38:37.404 } 00:38:37.404 ] 00:38:37.404 }' 00:38:37.404 17:35:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:37.404 17:35:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:38:37.404 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:38:37.404 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:38:37.404 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:38:37.404 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:37.404 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:37.404 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:37.404 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:37.404 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:38:37.404 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:37.404 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:37.404 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:37.404 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:37.404 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:37.404 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:37.404 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:37.404 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:37.404 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:37.404 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:37.404 "name": "raid_bdev1", 00:38:37.404 
"uuid": "d2f0b20f-8e8d-439a-9167-877c920c6de5", 00:38:37.404 "strip_size_kb": 0, 00:38:37.404 "state": "online", 00:38:37.404 "raid_level": "raid1", 00:38:37.404 "superblock": true, 00:38:37.404 "num_base_bdevs": 2, 00:38:37.404 "num_base_bdevs_discovered": 2, 00:38:37.404 "num_base_bdevs_operational": 2, 00:38:37.404 "base_bdevs_list": [ 00:38:37.404 { 00:38:37.404 "name": "spare", 00:38:37.404 "uuid": "c753cf2b-d84c-58b4-bd98-5ca8c771cfef", 00:38:37.404 "is_configured": true, 00:38:37.404 "data_offset": 2048, 00:38:37.404 "data_size": 63488 00:38:37.404 }, 00:38:37.404 { 00:38:37.404 "name": "BaseBdev2", 00:38:37.404 "uuid": "f7ad4084-b379-51f6-944b-72f33e85aac2", 00:38:37.404 "is_configured": true, 00:38:37.404 "data_offset": 2048, 00:38:37.404 "data_size": 63488 00:38:37.404 } 00:38:37.405 ] 00:38:37.405 }' 00:38:37.405 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:37.405 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:37.972 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:38:37.972 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:37.972 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:37.972 [2024-11-26 17:35:38.478109] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:37.972 [2024-11-26 17:35:38.478214] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:37.972 00:38:37.972 Latency(us) 00:38:37.972 [2024-11-26T17:35:38.667Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:37.972 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:38:37.972 raid_bdev1 : 7.87 97.21 291.63 0.00 0.00 13883.01 339.84 110810.21 00:38:37.972 [2024-11-26T17:35:38.667Z] 
=================================================================================================================== 00:38:37.972 [2024-11-26T17:35:38.667Z] Total : 97.21 291.63 0.00 0.00 13883.01 339.84 110810.21 00:38:37.972 [2024-11-26 17:35:38.604561] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:37.972 [2024-11-26 17:35:38.604731] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:37.972 [2024-11-26 17:35:38.604843] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:37.972 [2024-11-26 17:35:38.604901] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:38:37.972 { 00:38:37.972 "results": [ 00:38:37.972 { 00:38:37.972 "job": "raid_bdev1", 00:38:37.972 "core_mask": "0x1", 00:38:37.972 "workload": "randrw", 00:38:37.972 "percentage": 50, 00:38:37.972 "status": "finished", 00:38:37.972 "queue_depth": 2, 00:38:37.972 "io_size": 3145728, 00:38:37.972 "runtime": 7.869602, 00:38:37.972 "iops": 97.20949039100071, 00:38:37.972 "mibps": 291.62847117300214, 00:38:37.972 "io_failed": 0, 00:38:37.972 "io_timeout": 0, 00:38:37.972 "avg_latency_us": 13883.006038188201, 00:38:37.972 "min_latency_us": 339.8427947598253, 00:38:37.972 "max_latency_us": 110810.21484716157 00:38:37.972 } 00:38:37.972 ], 00:38:37.972 "core_count": 1 00:38:37.972 } 00:38:37.972 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:37.972 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:37.972 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:37.972 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:38:37.972 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:37.972 17:35:38 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:37.972 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:38:37.972 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:38:37.972 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:38:37.972 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:38:37.972 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:38:37.972 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:38:37.972 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:38:37.972 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:38:37.972 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:38:37.972 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:38:37.972 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:38:37.972 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:38:37.972 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:38:38.231 /dev/nbd0 00:38:38.231 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:38:38.231 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:38:38.498 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:38:38.498 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 
00:38:38.498 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:38:38.498 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:38:38.498 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:38:38.498 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:38:38.498 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:38:38.498 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:38:38.498 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:38.498 1+0 records in 00:38:38.498 1+0 records out 00:38:38.498 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000414384 s, 9.9 MB/s 00:38:38.498 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:38.498 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:38:38.498 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:38.498 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:38:38.498 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:38:38.498 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:38.498 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:38:38.498 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:38:38.498 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' 
-z BaseBdev2 ']' 00:38:38.498 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:38:38.498 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:38:38.498 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:38:38.498 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:38:38.498 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:38:38.498 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:38:38.498 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:38:38.498 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:38:38.498 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:38:38.498 17:35:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:38:38.767 /dev/nbd1 00:38:38.767 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:38:38.767 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:38:38.767 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:38:38.767 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:38:38.767 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:38:38.767 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:38:38.767 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:38:38.767 17:35:39 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:38:38.767 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:38:38.767 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:38:38.767 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:38.767 1+0 records in 00:38:38.767 1+0 records out 00:38:38.767 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000495637 s, 8.3 MB/s 00:38:38.767 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:38.767 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:38:38.767 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:38.767 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:38:38.767 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:38:38.767 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:38.767 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:38:38.767 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:38:38.767 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:38:38.767 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:38:38.767 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:38:38.767 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local 
nbd_list 00:38:38.767 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:38:38.767 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:38.767 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:38:39.026 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:38:39.026 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:38:39.026 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:38:39.026 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:39.026 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:39.026 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:38:39.026 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:38:39.026 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:38:39.026 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:38:39.026 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:38:39.026 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:38:39.026 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:38:39.026 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:38:39.026 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:39.026 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:38:39.284 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:38:39.284 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:38:39.285 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:38:39.285 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:39.285 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:39.285 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:38:39.285 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:38:39.285 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:38:39.285 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:38:39.285 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:38:39.285 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:39.285 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:39.285 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:39.285 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:38:39.285 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:39.285 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:39.285 [2024-11-26 17:35:39.934422] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:38:39.285 [2024-11-26 17:35:39.934489] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:39.285 [2024-11-26 17:35:39.934535] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:38:39.285 [2024-11-26 17:35:39.934549] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:39.285 [2024-11-26 17:35:39.937132] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:39.285 [2024-11-26 17:35:39.937179] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:38:39.285 [2024-11-26 17:35:39.937278] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:38:39.285 [2024-11-26 17:35:39.937346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:39.285 [2024-11-26 17:35:39.937507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:38:39.285 spare 00:38:39.285 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:39.285 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:38:39.285 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:39.285 17:35:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:39.544 [2024-11-26 17:35:40.037468] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:38:39.544 [2024-11-26 17:35:40.037537] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:38:39.545 [2024-11-26 17:35:40.037933] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:38:39.545 [2024-11-26 17:35:40.038171] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:38:39.545 [2024-11-26 17:35:40.038197] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with 
name raid_bdev1, raid_bdev 0x617000007b00 00:38:39.545 [2024-11-26 17:35:40.038434] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:39.545 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:39.545 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:38:39.545 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:39.545 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:39.545 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:39.545 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:39.545 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:38:39.545 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:39.545 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:39.545 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:39.545 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:39.545 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:39.545 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:39.545 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:39.545 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:39.545 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:39.545 17:35:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:39.545 "name": "raid_bdev1", 00:38:39.545 "uuid": "d2f0b20f-8e8d-439a-9167-877c920c6de5", 00:38:39.545 "strip_size_kb": 0, 00:38:39.545 "state": "online", 00:38:39.545 "raid_level": "raid1", 00:38:39.545 "superblock": true, 00:38:39.545 "num_base_bdevs": 2, 00:38:39.545 "num_base_bdevs_discovered": 2, 00:38:39.545 "num_base_bdevs_operational": 2, 00:38:39.545 "base_bdevs_list": [ 00:38:39.545 { 00:38:39.545 "name": "spare", 00:38:39.545 "uuid": "c753cf2b-d84c-58b4-bd98-5ca8c771cfef", 00:38:39.545 "is_configured": true, 00:38:39.545 "data_offset": 2048, 00:38:39.545 "data_size": 63488 00:38:39.545 }, 00:38:39.545 { 00:38:39.545 "name": "BaseBdev2", 00:38:39.545 "uuid": "f7ad4084-b379-51f6-944b-72f33e85aac2", 00:38:39.545 "is_configured": true, 00:38:39.545 "data_offset": 2048, 00:38:39.545 "data_size": 63488 00:38:39.545 } 00:38:39.545 ] 00:38:39.545 }' 00:38:39.545 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:39.545 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:40.114 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:40.114 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:40.114 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:38:40.114 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:38:40.114 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:40.114 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:40.114 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:40.114 17:35:40 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:40.114 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:40.114 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:40.114 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:40.114 "name": "raid_bdev1", 00:38:40.114 "uuid": "d2f0b20f-8e8d-439a-9167-877c920c6de5", 00:38:40.114 "strip_size_kb": 0, 00:38:40.114 "state": "online", 00:38:40.114 "raid_level": "raid1", 00:38:40.114 "superblock": true, 00:38:40.114 "num_base_bdevs": 2, 00:38:40.114 "num_base_bdevs_discovered": 2, 00:38:40.114 "num_base_bdevs_operational": 2, 00:38:40.114 "base_bdevs_list": [ 00:38:40.114 { 00:38:40.114 "name": "spare", 00:38:40.114 "uuid": "c753cf2b-d84c-58b4-bd98-5ca8c771cfef", 00:38:40.114 "is_configured": true, 00:38:40.114 "data_offset": 2048, 00:38:40.114 "data_size": 63488 00:38:40.114 }, 00:38:40.114 { 00:38:40.114 "name": "BaseBdev2", 00:38:40.114 "uuid": "f7ad4084-b379-51f6-944b-72f33e85aac2", 00:38:40.114 "is_configured": true, 00:38:40.114 "data_offset": 2048, 00:38:40.114 "data_size": 63488 00:38:40.114 } 00:38:40.114 ] 00:38:40.114 }' 00:38:40.114 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:40.114 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:38:40.114 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:40.114 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:38:40.114 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:40.114 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:40.114 17:35:40 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:40.114 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:38:40.115 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:40.115 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:38:40.115 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:38:40.115 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:40.115 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:40.115 [2024-11-26 17:35:40.721290] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:40.115 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:40.115 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:40.115 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:40.115 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:40.115 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:40.115 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:40.115 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:38:40.115 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:40.115 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:40.115 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:38:40.115 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:40.115 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:40.115 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:40.115 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:40.115 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:40.115 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:40.115 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:40.115 "name": "raid_bdev1", 00:38:40.115 "uuid": "d2f0b20f-8e8d-439a-9167-877c920c6de5", 00:38:40.115 "strip_size_kb": 0, 00:38:40.115 "state": "online", 00:38:40.115 "raid_level": "raid1", 00:38:40.115 "superblock": true, 00:38:40.115 "num_base_bdevs": 2, 00:38:40.115 "num_base_bdevs_discovered": 1, 00:38:40.115 "num_base_bdevs_operational": 1, 00:38:40.115 "base_bdevs_list": [ 00:38:40.115 { 00:38:40.115 "name": null, 00:38:40.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:40.115 "is_configured": false, 00:38:40.115 "data_offset": 0, 00:38:40.115 "data_size": 63488 00:38:40.115 }, 00:38:40.115 { 00:38:40.115 "name": "BaseBdev2", 00:38:40.115 "uuid": "f7ad4084-b379-51f6-944b-72f33e85aac2", 00:38:40.115 "is_configured": true, 00:38:40.115 "data_offset": 2048, 00:38:40.115 "data_size": 63488 00:38:40.115 } 00:38:40.115 ] 00:38:40.115 }' 00:38:40.115 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:40.115 17:35:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:40.683 17:35:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 
00:38:40.683 17:35:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:40.683 17:35:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:40.683 [2024-11-26 17:35:41.176633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:40.683 [2024-11-26 17:35:41.176851] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:38:40.683 [2024-11-26 17:35:41.176869] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:38:40.683 [2024-11-26 17:35:41.176911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:40.683 [2024-11-26 17:35:41.194392] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:38:40.683 17:35:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:40.683 17:35:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:38:40.683 [2024-11-26 17:35:41.196527] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:41.619 17:35:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:41.619 17:35:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:41.619 17:35:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:41.619 17:35:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:41.619 17:35:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:41.619 17:35:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:41.619 17:35:42 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:41.619 17:35:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:41.619 17:35:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:41.619 17:35:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:41.619 17:35:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:41.619 "name": "raid_bdev1", 00:38:41.619 "uuid": "d2f0b20f-8e8d-439a-9167-877c920c6de5", 00:38:41.619 "strip_size_kb": 0, 00:38:41.619 "state": "online", 00:38:41.619 "raid_level": "raid1", 00:38:41.619 "superblock": true, 00:38:41.619 "num_base_bdevs": 2, 00:38:41.619 "num_base_bdevs_discovered": 2, 00:38:41.619 "num_base_bdevs_operational": 2, 00:38:41.619 "process": { 00:38:41.619 "type": "rebuild", 00:38:41.619 "target": "spare", 00:38:41.619 "progress": { 00:38:41.619 "blocks": 20480, 00:38:41.619 "percent": 32 00:38:41.619 } 00:38:41.619 }, 00:38:41.619 "base_bdevs_list": [ 00:38:41.619 { 00:38:41.619 "name": "spare", 00:38:41.619 "uuid": "c753cf2b-d84c-58b4-bd98-5ca8c771cfef", 00:38:41.619 "is_configured": true, 00:38:41.619 "data_offset": 2048, 00:38:41.619 "data_size": 63488 00:38:41.619 }, 00:38:41.619 { 00:38:41.619 "name": "BaseBdev2", 00:38:41.619 "uuid": "f7ad4084-b379-51f6-944b-72f33e85aac2", 00:38:41.619 "is_configured": true, 00:38:41.619 "data_offset": 2048, 00:38:41.619 "data_size": 63488 00:38:41.619 } 00:38:41.619 ] 00:38:41.619 }' 00:38:41.619 17:35:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:41.619 17:35:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:41.619 17:35:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:41.878 17:35:42 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:41.878 17:35:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:38:41.878 17:35:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:41.878 17:35:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:41.878 [2024-11-26 17:35:42.344613] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:41.878 [2024-11-26 17:35:42.402388] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:38:41.878 [2024-11-26 17:35:42.402457] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:41.878 [2024-11-26 17:35:42.402491] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:41.878 [2024-11-26 17:35:42.402499] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:38:41.878 17:35:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:41.878 17:35:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:41.878 17:35:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:41.878 17:35:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:41.878 17:35:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:41.878 17:35:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:41.878 17:35:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:38:41.878 17:35:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:41.878 17:35:42 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:41.878 17:35:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:41.878 17:35:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:41.878 17:35:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:41.878 17:35:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:41.878 17:35:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:41.878 17:35:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:41.878 17:35:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:41.878 17:35:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:41.878 "name": "raid_bdev1", 00:38:41.878 "uuid": "d2f0b20f-8e8d-439a-9167-877c920c6de5", 00:38:41.878 "strip_size_kb": 0, 00:38:41.878 "state": "online", 00:38:41.878 "raid_level": "raid1", 00:38:41.878 "superblock": true, 00:38:41.878 "num_base_bdevs": 2, 00:38:41.878 "num_base_bdevs_discovered": 1, 00:38:41.878 "num_base_bdevs_operational": 1, 00:38:41.878 "base_bdevs_list": [ 00:38:41.878 { 00:38:41.878 "name": null, 00:38:41.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:41.878 "is_configured": false, 00:38:41.878 "data_offset": 0, 00:38:41.878 "data_size": 63488 00:38:41.878 }, 00:38:41.878 { 00:38:41.878 "name": "BaseBdev2", 00:38:41.878 "uuid": "f7ad4084-b379-51f6-944b-72f33e85aac2", 00:38:41.878 "is_configured": true, 00:38:41.878 "data_offset": 2048, 00:38:41.878 "data_size": 63488 00:38:41.878 } 00:38:41.878 ] 00:38:41.878 }' 00:38:41.878 17:35:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:41.878 17:35:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:38:42.447 17:35:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:38:42.447 17:35:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:42.447 17:35:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:42.447 [2024-11-26 17:35:42.890382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:38:42.447 [2024-11-26 17:35:42.890470] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:42.447 [2024-11-26 17:35:42.890494] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:38:42.447 [2024-11-26 17:35:42.890503] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:42.447 [2024-11-26 17:35:42.891015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:42.447 [2024-11-26 17:35:42.891043] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:38:42.447 [2024-11-26 17:35:42.891141] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:38:42.447 [2024-11-26 17:35:42.891157] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:38:42.447 [2024-11-26 17:35:42.891170] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:38:42.447 [2024-11-26 17:35:42.891190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:42.447 [2024-11-26 17:35:42.908180] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:38:42.447 spare 00:38:42.447 17:35:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:42.447 [2024-11-26 17:35:42.910027] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:42.447 17:35:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:38:43.384 17:35:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:43.384 17:35:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:43.384 17:35:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:43.384 17:35:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:43.384 17:35:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:43.384 17:35:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:43.384 17:35:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:43.384 17:35:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:43.384 17:35:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:43.384 17:35:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:43.384 17:35:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:43.384 "name": "raid_bdev1", 00:38:43.384 "uuid": "d2f0b20f-8e8d-439a-9167-877c920c6de5", 00:38:43.384 "strip_size_kb": 0, 00:38:43.384 
"state": "online", 00:38:43.384 "raid_level": "raid1", 00:38:43.384 "superblock": true, 00:38:43.384 "num_base_bdevs": 2, 00:38:43.384 "num_base_bdevs_discovered": 2, 00:38:43.384 "num_base_bdevs_operational": 2, 00:38:43.384 "process": { 00:38:43.384 "type": "rebuild", 00:38:43.384 "target": "spare", 00:38:43.384 "progress": { 00:38:43.384 "blocks": 20480, 00:38:43.384 "percent": 32 00:38:43.384 } 00:38:43.384 }, 00:38:43.384 "base_bdevs_list": [ 00:38:43.384 { 00:38:43.384 "name": "spare", 00:38:43.384 "uuid": "c753cf2b-d84c-58b4-bd98-5ca8c771cfef", 00:38:43.384 "is_configured": true, 00:38:43.384 "data_offset": 2048, 00:38:43.384 "data_size": 63488 00:38:43.384 }, 00:38:43.384 { 00:38:43.384 "name": "BaseBdev2", 00:38:43.384 "uuid": "f7ad4084-b379-51f6-944b-72f33e85aac2", 00:38:43.384 "is_configured": true, 00:38:43.384 "data_offset": 2048, 00:38:43.384 "data_size": 63488 00:38:43.384 } 00:38:43.384 ] 00:38:43.384 }' 00:38:43.384 17:35:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:43.384 17:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:43.384 17:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:43.384 17:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:43.384 17:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:38:43.384 17:35:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:43.384 17:35:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:43.384 [2024-11-26 17:35:44.049595] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:43.648 [2024-11-26 17:35:44.115747] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:38:43.648 [2024-11-26 17:35:44.115830] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:43.648 [2024-11-26 17:35:44.115845] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:43.648 [2024-11-26 17:35:44.115854] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:38:43.648 17:35:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:43.648 17:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:43.648 17:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:43.648 17:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:43.648 17:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:43.648 17:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:43.648 17:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:38:43.648 17:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:43.648 17:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:43.648 17:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:43.648 17:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:43.648 17:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:43.648 17:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:43.648 17:35:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:43.648 17:35:44 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:43.648 17:35:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:43.649 17:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:43.649 "name": "raid_bdev1", 00:38:43.649 "uuid": "d2f0b20f-8e8d-439a-9167-877c920c6de5", 00:38:43.649 "strip_size_kb": 0, 00:38:43.649 "state": "online", 00:38:43.649 "raid_level": "raid1", 00:38:43.649 "superblock": true, 00:38:43.649 "num_base_bdevs": 2, 00:38:43.649 "num_base_bdevs_discovered": 1, 00:38:43.649 "num_base_bdevs_operational": 1, 00:38:43.649 "base_bdevs_list": [ 00:38:43.649 { 00:38:43.649 "name": null, 00:38:43.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:43.649 "is_configured": false, 00:38:43.649 "data_offset": 0, 00:38:43.649 "data_size": 63488 00:38:43.649 }, 00:38:43.649 { 00:38:43.649 "name": "BaseBdev2", 00:38:43.649 "uuid": "f7ad4084-b379-51f6-944b-72f33e85aac2", 00:38:43.649 "is_configured": true, 00:38:43.649 "data_offset": 2048, 00:38:43.649 "data_size": 63488 00:38:43.649 } 00:38:43.649 ] 00:38:43.649 }' 00:38:43.649 17:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:43.649 17:35:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:44.215 17:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:44.215 17:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:44.215 17:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:38:44.215 17:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:38:44.215 17:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:44.215 17:35:44 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:44.215 17:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:44.215 17:35:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:44.215 17:35:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:44.215 17:35:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:44.215 17:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:44.215 "name": "raid_bdev1", 00:38:44.215 "uuid": "d2f0b20f-8e8d-439a-9167-877c920c6de5", 00:38:44.215 "strip_size_kb": 0, 00:38:44.215 "state": "online", 00:38:44.215 "raid_level": "raid1", 00:38:44.215 "superblock": true, 00:38:44.215 "num_base_bdevs": 2, 00:38:44.215 "num_base_bdevs_discovered": 1, 00:38:44.215 "num_base_bdevs_operational": 1, 00:38:44.215 "base_bdevs_list": [ 00:38:44.215 { 00:38:44.215 "name": null, 00:38:44.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:44.215 "is_configured": false, 00:38:44.215 "data_offset": 0, 00:38:44.215 "data_size": 63488 00:38:44.215 }, 00:38:44.215 { 00:38:44.215 "name": "BaseBdev2", 00:38:44.215 "uuid": "f7ad4084-b379-51f6-944b-72f33e85aac2", 00:38:44.215 "is_configured": true, 00:38:44.215 "data_offset": 2048, 00:38:44.215 "data_size": 63488 00:38:44.215 } 00:38:44.215 ] 00:38:44.215 }' 00:38:44.215 17:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:44.215 17:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:38:44.215 17:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:44.215 17:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:38:44.215 17:35:44 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:38:44.215 17:35:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:44.215 17:35:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:44.215 17:35:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:44.215 17:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:38:44.215 17:35:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:44.215 17:35:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:44.215 [2024-11-26 17:35:44.770199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:38:44.215 [2024-11-26 17:35:44.770262] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:44.215 [2024-11-26 17:35:44.770289] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:38:44.215 [2024-11-26 17:35:44.770306] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:44.215 [2024-11-26 17:35:44.770816] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:44.215 [2024-11-26 17:35:44.770849] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:38:44.215 [2024-11-26 17:35:44.770944] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:38:44.215 [2024-11-26 17:35:44.770966] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:38:44.215 [2024-11-26 17:35:44.770974] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:38:44.215 [2024-11-26 17:35:44.770986] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:38:44.215 BaseBdev1 00:38:44.215 17:35:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:44.215 17:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:38:45.152 17:35:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:45.152 17:35:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:45.152 17:35:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:45.152 17:35:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:45.152 17:35:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:45.152 17:35:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:38:45.152 17:35:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:45.152 17:35:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:45.152 17:35:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:45.152 17:35:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:45.152 17:35:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:45.152 17:35:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:45.152 17:35:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:45.152 17:35:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:45.152 17:35:45 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:45.152 17:35:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:45.152 "name": "raid_bdev1", 00:38:45.152 "uuid": "d2f0b20f-8e8d-439a-9167-877c920c6de5", 00:38:45.152 "strip_size_kb": 0, 00:38:45.152 "state": "online", 00:38:45.152 "raid_level": "raid1", 00:38:45.152 "superblock": true, 00:38:45.152 "num_base_bdevs": 2, 00:38:45.152 "num_base_bdevs_discovered": 1, 00:38:45.152 "num_base_bdevs_operational": 1, 00:38:45.152 "base_bdevs_list": [ 00:38:45.152 { 00:38:45.152 "name": null, 00:38:45.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:45.152 "is_configured": false, 00:38:45.152 "data_offset": 0, 00:38:45.152 "data_size": 63488 00:38:45.152 }, 00:38:45.152 { 00:38:45.152 "name": "BaseBdev2", 00:38:45.152 "uuid": "f7ad4084-b379-51f6-944b-72f33e85aac2", 00:38:45.152 "is_configured": true, 00:38:45.152 "data_offset": 2048, 00:38:45.152 "data_size": 63488 00:38:45.152 } 00:38:45.152 ] 00:38:45.152 }' 00:38:45.152 17:35:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:45.152 17:35:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:45.721 17:35:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:45.721 17:35:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:45.721 17:35:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:38:45.721 17:35:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:38:45.721 17:35:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:45.721 17:35:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:45.721 17:35:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:38:45.721 17:35:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:45.721 17:35:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:45.721 17:35:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:45.721 17:35:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:45.721 "name": "raid_bdev1", 00:38:45.721 "uuid": "d2f0b20f-8e8d-439a-9167-877c920c6de5", 00:38:45.721 "strip_size_kb": 0, 00:38:45.721 "state": "online", 00:38:45.721 "raid_level": "raid1", 00:38:45.721 "superblock": true, 00:38:45.721 "num_base_bdevs": 2, 00:38:45.721 "num_base_bdevs_discovered": 1, 00:38:45.721 "num_base_bdevs_operational": 1, 00:38:45.721 "base_bdevs_list": [ 00:38:45.721 { 00:38:45.721 "name": null, 00:38:45.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:45.721 "is_configured": false, 00:38:45.721 "data_offset": 0, 00:38:45.721 "data_size": 63488 00:38:45.721 }, 00:38:45.721 { 00:38:45.721 "name": "BaseBdev2", 00:38:45.721 "uuid": "f7ad4084-b379-51f6-944b-72f33e85aac2", 00:38:45.721 "is_configured": true, 00:38:45.721 "data_offset": 2048, 00:38:45.721 "data_size": 63488 00:38:45.721 } 00:38:45.721 ] 00:38:45.721 }' 00:38:45.721 17:35:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:45.721 17:35:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:38:45.721 17:35:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:45.721 17:35:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:38:45.721 17:35:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:38:45.721 17:35:46 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@652 -- # local es=0 00:38:45.721 17:35:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:38:45.721 17:35:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:38:45.721 17:35:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:45.721 17:35:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:38:45.721 17:35:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:45.721 17:35:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:38:45.721 17:35:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:45.721 17:35:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:45.721 [2024-11-26 17:35:46.379810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:45.721 [2024-11-26 17:35:46.379986] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:38:45.721 [2024-11-26 17:35:46.380005] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:38:45.721 request: 00:38:45.721 { 00:38:45.721 "base_bdev": "BaseBdev1", 00:38:45.721 "raid_bdev": "raid_bdev1", 00:38:45.721 "method": "bdev_raid_add_base_bdev", 00:38:45.721 "req_id": 1 00:38:45.721 } 00:38:45.721 Got JSON-RPC error response 00:38:45.721 response: 00:38:45.721 { 00:38:45.721 "code": -22, 00:38:45.721 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:38:45.721 } 00:38:45.721 17:35:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:38:45.721 17:35:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:38:45.721 17:35:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:45.721 17:35:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:45.721 17:35:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:45.721 17:35:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:38:47.095 17:35:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:47.095 17:35:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:47.095 17:35:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:47.095 17:35:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:47.095 17:35:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:47.095 17:35:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:38:47.095 17:35:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:47.095 17:35:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:47.095 17:35:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:47.095 17:35:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:47.095 17:35:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:47.095 17:35:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:47.095 17:35:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:38:47.095 17:35:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:47.095 17:35:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:47.095 17:35:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:47.095 "name": "raid_bdev1", 00:38:47.095 "uuid": "d2f0b20f-8e8d-439a-9167-877c920c6de5", 00:38:47.095 "strip_size_kb": 0, 00:38:47.095 "state": "online", 00:38:47.095 "raid_level": "raid1", 00:38:47.095 "superblock": true, 00:38:47.095 "num_base_bdevs": 2, 00:38:47.095 "num_base_bdevs_discovered": 1, 00:38:47.095 "num_base_bdevs_operational": 1, 00:38:47.095 "base_bdevs_list": [ 00:38:47.095 { 00:38:47.095 "name": null, 00:38:47.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:47.095 "is_configured": false, 00:38:47.095 "data_offset": 0, 00:38:47.095 "data_size": 63488 00:38:47.095 }, 00:38:47.095 { 00:38:47.095 "name": "BaseBdev2", 00:38:47.095 "uuid": "f7ad4084-b379-51f6-944b-72f33e85aac2", 00:38:47.095 "is_configured": true, 00:38:47.095 "data_offset": 2048, 00:38:47.095 "data_size": 63488 00:38:47.095 } 00:38:47.095 ] 00:38:47.095 }' 00:38:47.095 17:35:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:47.095 17:35:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:47.355 17:35:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:47.355 17:35:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:47.355 17:35:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:38:47.355 17:35:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:38:47.355 17:35:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:47.355 17:35:47 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:47.355 17:35:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:47.355 17:35:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:47.355 17:35:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:47.355 17:35:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:47.355 17:35:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:47.355 "name": "raid_bdev1", 00:38:47.355 "uuid": "d2f0b20f-8e8d-439a-9167-877c920c6de5", 00:38:47.355 "strip_size_kb": 0, 00:38:47.355 "state": "online", 00:38:47.355 "raid_level": "raid1", 00:38:47.355 "superblock": true, 00:38:47.355 "num_base_bdevs": 2, 00:38:47.355 "num_base_bdevs_discovered": 1, 00:38:47.355 "num_base_bdevs_operational": 1, 00:38:47.355 "base_bdevs_list": [ 00:38:47.355 { 00:38:47.355 "name": null, 00:38:47.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:47.355 "is_configured": false, 00:38:47.355 "data_offset": 0, 00:38:47.355 "data_size": 63488 00:38:47.355 }, 00:38:47.355 { 00:38:47.355 "name": "BaseBdev2", 00:38:47.355 "uuid": "f7ad4084-b379-51f6-944b-72f33e85aac2", 00:38:47.355 "is_configured": true, 00:38:47.355 "data_offset": 2048, 00:38:47.355 "data_size": 63488 00:38:47.355 } 00:38:47.355 ] 00:38:47.355 }' 00:38:47.355 17:35:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:47.355 17:35:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:38:47.355 17:35:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:47.355 17:35:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:38:47.355 17:35:48 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77139 00:38:47.355 17:35:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 77139 ']' 00:38:47.355 17:35:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 77139 00:38:47.355 17:35:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:38:47.355 17:35:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:47.355 17:35:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77139 00:38:47.355 17:35:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:47.355 17:35:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:47.355 17:35:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77139' 00:38:47.355 killing process with pid 77139 00:38:47.355 Received shutdown signal, test time was about 17.342763 seconds 00:38:47.355 00:38:47.355 Latency(us) 00:38:47.355 [2024-11-26T17:35:48.050Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:47.355 [2024-11-26T17:35:48.050Z] =================================================================================================================== 00:38:47.355 [2024-11-26T17:35:48.050Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:47.355 17:35:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 77139 00:38:47.355 [2024-11-26 17:35:48.032779] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:38:47.355 [2024-11-26 17:35:48.032915] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:47.355 17:35:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 77139 00:38:47.355 [2024-11-26 17:35:48.032974] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:47.355 [2024-11-26 17:35:48.032985] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:38:47.614 [2024-11-26 17:35:48.279784] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:38:48.994 17:35:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:38:48.995 00:38:48.995 real 0m20.603s 00:38:48.995 user 0m26.990s 00:38:48.995 sys 0m2.192s 00:38:48.995 17:35:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:48.995 ************************************ 00:38:48.995 END TEST raid_rebuild_test_sb_io 00:38:48.995 ************************************ 00:38:48.995 17:35:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:38:48.995 17:35:49 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:38:48.995 17:35:49 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:38:48.995 17:35:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:38:48.995 17:35:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:48.995 17:35:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:38:48.995 ************************************ 00:38:48.995 START TEST raid_rebuild_test 00:38:48.995 ************************************ 00:38:48.995 17:35:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:38:48.995 17:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:38:48.995 17:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:38:48.995 17:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:38:48.995 17:35:49 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:38:48.995 17:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:38:48.995 17:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:38:48.995 17:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:38:48.995 17:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:38:48.995 17:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:38:48.995 17:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:38:48.995 17:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:38:48.995 17:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:38:48.995 17:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:38:48.995 17:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:38:48.995 17:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:38:48.995 17:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:38:48.995 17:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:38:48.995 17:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:38:48.995 17:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:38:48.995 17:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:38:48.995 17:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:38:48.995 17:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:38:48.995 17:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:38:48.995 17:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:38:48.995 17:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:38:48.995 17:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:38:48.995 17:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:38:48.995 17:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:38:48.995 17:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:38:48.995 17:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77833 00:38:48.995 17:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77833 00:38:48.995 17:35:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:38:48.995 17:35:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77833 ']' 00:38:48.995 17:35:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:48.995 17:35:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:48.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:48.995 17:35:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:48.995 17:35:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:48.995 17:35:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:49.254 I/O size of 3145728 is greater than zero copy threshold (65536). 00:38:49.254 Zero copy mechanism will not be used. 
00:38:49.254 [2024-11-26 17:35:49.702247] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:38:49.254 [2024-11-26 17:35:49.702383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77833 ] 00:38:49.254 [2024-11-26 17:35:49.883361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:49.513 [2024-11-26 17:35:50.009457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:49.771 [2024-11-26 17:35:50.215275] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:49.771 [2024-11-26 17:35:50.215344] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:50.029 17:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:50.029 17:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:38:50.029 17:35:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:38:50.029 17:35:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:38:50.029 17:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.029 17:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:50.029 BaseBdev1_malloc 00:38:50.029 17:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.029 17:35:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:38:50.029 17:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.029 17:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:50.029 
[2024-11-26 17:35:50.635952] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:38:50.029 [2024-11-26 17:35:50.636014] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:50.029 [2024-11-26 17:35:50.636039] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:38:50.029 [2024-11-26 17:35:50.636052] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:50.029 [2024-11-26 17:35:50.638374] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:50.029 [2024-11-26 17:35:50.638417] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:38:50.029 BaseBdev1 00:38:50.029 17:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.029 17:35:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:38:50.029 17:35:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:38:50.029 17:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.029 17:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:50.029 BaseBdev2_malloc 00:38:50.029 17:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.029 17:35:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:38:50.029 17:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.029 17:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:50.029 [2024-11-26 17:35:50.695319] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:38:50.029 [2024-11-26 17:35:50.695385] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:38:50.029 [2024-11-26 17:35:50.695429] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:38:50.029 [2024-11-26 17:35:50.695442] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:50.029 [2024-11-26 17:35:50.697755] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:50.029 [2024-11-26 17:35:50.697795] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:38:50.029 BaseBdev2 00:38:50.029 17:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.029 17:35:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:38:50.029 17:35:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:38:50.029 17:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.029 17:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:50.287 BaseBdev3_malloc 00:38:50.287 17:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.287 17:35:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:38:50.287 17:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.287 17:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:50.287 [2024-11-26 17:35:50.764728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:38:50.287 [2024-11-26 17:35:50.764797] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:50.287 [2024-11-26 17:35:50.764823] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:38:50.287 [2024-11-26 17:35:50.764836] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:50.287 [2024-11-26 17:35:50.767073] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:50.287 [2024-11-26 17:35:50.767115] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:38:50.287 BaseBdev3 00:38:50.287 17:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.287 17:35:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:38:50.287 17:35:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:38:50.287 17:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.287 17:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:50.287 BaseBdev4_malloc 00:38:50.287 17:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.287 17:35:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:38:50.287 17:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.287 17:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:50.287 [2024-11-26 17:35:50.819291] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:38:50.287 [2024-11-26 17:35:50.819373] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:50.287 [2024-11-26 17:35:50.819395] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:38:50.287 [2024-11-26 17:35:50.819406] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:50.287 [2024-11-26 17:35:50.821751] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:50.287 [2024-11-26 17:35:50.821795] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:38:50.287 BaseBdev4 00:38:50.287 17:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.287 17:35:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:38:50.287 17:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.287 17:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:50.287 spare_malloc 00:38:50.287 17:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.287 17:35:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:38:50.287 17:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.287 17:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:50.287 spare_delay 00:38:50.287 17:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.287 17:35:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:38:50.287 17:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.287 17:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:50.287 [2024-11-26 17:35:50.881171] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:38:50.287 [2024-11-26 17:35:50.881234] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:50.287 [2024-11-26 17:35:50.881255] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:38:50.287 [2024-11-26 17:35:50.881267] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:50.287 [2024-11-26 
17:35:50.883464] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:50.287 [2024-11-26 17:35:50.883503] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:38:50.287 spare 00:38:50.287 17:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.287 17:35:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:38:50.287 17:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.287 17:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:50.287 [2024-11-26 17:35:50.893187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:50.287 [2024-11-26 17:35:50.895161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:38:50.287 [2024-11-26 17:35:50.895234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:38:50.287 [2024-11-26 17:35:50.895292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:38:50.287 [2024-11-26 17:35:50.895379] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:38:50.287 [2024-11-26 17:35:50.895393] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:38:50.287 [2024-11-26 17:35:50.895691] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:38:50.287 [2024-11-26 17:35:50.895892] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:38:50.287 [2024-11-26 17:35:50.895915] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:38:50.287 [2024-11-26 17:35:50.896092] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
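With `raid_bdev1` assembled, the `verify_raid_bdev_state` helper that follows drives `rpc_cmd bdev_raid_get_bdevs all` and narrows the result with `jq -r '.[] | select(.name == "raid_bdev1")'`. A minimal Python sketch of the same filter-and-assert step, assuming only the JSON shape visible in this log (the name and UUID are copied from the log output; the surrounding list literal is illustrative):

```python
import json

# Shape taken from the bdev_raid_get_bdevs output logged below; the
# UUID is copied from the log, the list wrapper is illustrative.
rpc_output = json.loads("""
[
  {
    "name": "raid_bdev1",
    "uuid": "fc6c55e4-caba-4258-8861-6ef53bd6527f",
    "strip_size_kb": 0,
    "state": "online",
    "raid_level": "raid1",
    "superblock": false,
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 4,
    "num_base_bdevs_operational": 4
  }
]
""")

# Equivalent of: jq -r '.[] | select(.name == "raid_bdev1")'
info = next(b for b in rpc_output if b["name"] == "raid_bdev1")

assert info["state"] == "online"
assert info["raid_level"] == "raid1"
assert info["num_base_bdevs_discovered"] == info["num_base_bdevs_operational"] == 4
```

The test script repeats this query after each topology change, so the same filter runs with expected discovered/operational counts of 4, then 3 after a base bdev is removed.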
00:38:50.287 17:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.288 17:35:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:38:50.288 17:35:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:50.288 17:35:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:50.288 17:35:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:50.288 17:35:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:50.288 17:35:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:38:50.288 17:35:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:50.288 17:35:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:50.288 17:35:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:50.288 17:35:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:50.288 17:35:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:50.288 17:35:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:50.288 17:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.288 17:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:50.288 17:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.288 17:35:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:50.288 "name": "raid_bdev1", 00:38:50.288 "uuid": "fc6c55e4-caba-4258-8861-6ef53bd6527f", 00:38:50.288 "strip_size_kb": 0, 00:38:50.288 "state": "online", 00:38:50.288 "raid_level": 
"raid1", 00:38:50.288 "superblock": false, 00:38:50.288 "num_base_bdevs": 4, 00:38:50.288 "num_base_bdevs_discovered": 4, 00:38:50.288 "num_base_bdevs_operational": 4, 00:38:50.288 "base_bdevs_list": [ 00:38:50.288 { 00:38:50.288 "name": "BaseBdev1", 00:38:50.288 "uuid": "86f6e145-c6ca-5e43-a3a7-59181c27d2f1", 00:38:50.288 "is_configured": true, 00:38:50.288 "data_offset": 0, 00:38:50.288 "data_size": 65536 00:38:50.288 }, 00:38:50.288 { 00:38:50.288 "name": "BaseBdev2", 00:38:50.288 "uuid": "9aa4ec42-6a63-5e6c-a7fb-628f074781d2", 00:38:50.288 "is_configured": true, 00:38:50.288 "data_offset": 0, 00:38:50.288 "data_size": 65536 00:38:50.288 }, 00:38:50.288 { 00:38:50.288 "name": "BaseBdev3", 00:38:50.288 "uuid": "9e34f94e-72f6-5e6a-bf9c-b46829eff3cf", 00:38:50.288 "is_configured": true, 00:38:50.288 "data_offset": 0, 00:38:50.288 "data_size": 65536 00:38:50.288 }, 00:38:50.288 { 00:38:50.288 "name": "BaseBdev4", 00:38:50.288 "uuid": "728ba3df-fc6d-5017-a414-def19e90ed83", 00:38:50.288 "is_configured": true, 00:38:50.288 "data_offset": 0, 00:38:50.288 "data_size": 65536 00:38:50.288 } 00:38:50.288 ] 00:38:50.288 }' 00:38:50.288 17:35:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:50.288 17:35:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:50.856 17:35:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:38:50.856 17:35:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:38:50.856 17:35:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.856 17:35:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:50.856 [2024-11-26 17:35:51.348951] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:50.856 17:35:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.856 17:35:51 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:38:50.856 17:35:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:50.856 17:35:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.856 17:35:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:50.856 17:35:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:38:50.856 17:35:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.856 17:35:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:38:50.856 17:35:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:38:50.856 17:35:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:38:50.856 17:35:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:38:50.856 17:35:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:38:50.856 17:35:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:38:50.856 17:35:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:38:50.856 17:35:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:38:50.856 17:35:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:38:50.856 17:35:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:38:50.856 17:35:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:38:50.856 17:35:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:38:50.856 17:35:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:38:50.856 17:35:51 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:38:51.114 [2024-11-26 17:35:51.632615] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:38:51.114 /dev/nbd0 00:38:51.114 17:35:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:38:51.114 17:35:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:38:51.114 17:35:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:38:51.114 17:35:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:38:51.114 17:35:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:38:51.114 17:35:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:38:51.114 17:35:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:38:51.114 17:35:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:38:51.114 17:35:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:38:51.114 17:35:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:38:51.114 17:35:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:51.114 1+0 records in 00:38:51.114 1+0 records out 00:38:51.114 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238197 s, 17.2 MB/s 00:38:51.114 17:35:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:51.114 17:35:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:38:51.114 17:35:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
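The `waitfornbd` trace above shows the pattern the harness uses before touching the device: poll `/proc/partitions` for `nbd0` up to 20 times, `break` on the first hit, then probe with a direct-I/O `dd` read. A minimal sketch of that retry-until-visible pattern; `is_visible` is a stand-in predicate, not part of the SPDK scripts:

```python
# Mirrors the loop in waitfornbd: (( i = 1 )); (( i <= 20 )); break on
# the first successful `grep -q -w nbd0 /proc/partitions`.
def wait_for_device(is_visible, attempts=20):
    for i in range(1, attempts + 1):
        if is_visible():
            return i  # device showed up on attempt i
    raise TimeoutError("device never appeared")

# Simulate a device that becomes visible on the third poll.
polls = iter([False, False, True])
assert wait_for_device(lambda: next(polls)) == 3
```

The shell version follows the successful poll with a 1-block `dd` and a `stat` size check (visible in the trace) to confirm the node is actually readable, not merely listed.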
00:38:51.114 17:35:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:38:51.114 17:35:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:38:51.115 17:35:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:51.115 17:35:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:38:51.115 17:35:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:38:51.115 17:35:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:38:51.115 17:35:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:38:59.231 65536+0 records in 00:38:59.231 65536+0 records out 00:38:59.231 33554432 bytes (34 MB, 32 MiB) copied, 6.8719 s, 4.9 MB/s 00:38:59.231 17:35:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:38:59.231 17:35:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:38:59.231 17:35:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:38:59.231 17:35:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:38:59.231 17:35:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:38:59.231 17:35:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:59.232 17:35:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:38:59.232 17:35:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:38:59.232 [2024-11-26 17:35:58.791563] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:59.232 17:35:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:38:59.232 
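The `dd` totals in the log are internally consistent: the write fills the entire 65536-block raid1 volume at 512 B per block (`raid_bdev_size=65536` is read back from `bdev_get_bdevs` just above). A quick arithmetic cross-check, with all three input values copied from the log:

```python
# Values from the dd summary above: 65536 records of 512 bytes written
# through /dev/nbd0 in 6.8719 s.
records, bs, seconds = 65536, 512, 6.8719

total = records * bs
assert total == 33554432             # bytes, as dd reports
assert total == 32 * 1024 * 1024     # 32 MiB: the whole raid1 volume
assert round(total / seconds / 1e6, 1) == 4.9   # ~4.9 MB/s, matching dd
```

The mirrored raid1 level is why the exported size equals a single base bdev (65536 blocks) rather than the sum of all four.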
17:35:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:38:59.232 17:35:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:59.232 17:35:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:59.232 17:35:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:38:59.232 17:35:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:38:59.232 17:35:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:38:59.232 17:35:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:38:59.232 17:35:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.232 17:35:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:59.232 [2024-11-26 17:35:58.807905] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:38:59.232 17:35:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:59.232 17:35:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:38:59.232 17:35:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:59.232 17:35:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:59.232 17:35:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:59.232 17:35:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:59.232 17:35:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:38:59.232 17:35:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:59.232 17:35:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:59.232 17:35:58 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:59.232 17:35:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:59.232 17:35:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:59.232 17:35:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:59.232 17:35:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.232 17:35:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:59.232 17:35:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:59.232 17:35:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:59.232 "name": "raid_bdev1", 00:38:59.232 "uuid": "fc6c55e4-caba-4258-8861-6ef53bd6527f", 00:38:59.232 "strip_size_kb": 0, 00:38:59.232 "state": "online", 00:38:59.232 "raid_level": "raid1", 00:38:59.232 "superblock": false, 00:38:59.232 "num_base_bdevs": 4, 00:38:59.232 "num_base_bdevs_discovered": 3, 00:38:59.232 "num_base_bdevs_operational": 3, 00:38:59.232 "base_bdevs_list": [ 00:38:59.232 { 00:38:59.232 "name": null, 00:38:59.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:59.232 "is_configured": false, 00:38:59.232 "data_offset": 0, 00:38:59.232 "data_size": 65536 00:38:59.232 }, 00:38:59.232 { 00:38:59.232 "name": "BaseBdev2", 00:38:59.232 "uuid": "9aa4ec42-6a63-5e6c-a7fb-628f074781d2", 00:38:59.232 "is_configured": true, 00:38:59.232 "data_offset": 0, 00:38:59.232 "data_size": 65536 00:38:59.232 }, 00:38:59.232 { 00:38:59.232 "name": "BaseBdev3", 00:38:59.232 "uuid": "9e34f94e-72f6-5e6a-bf9c-b46829eff3cf", 00:38:59.232 "is_configured": true, 00:38:59.232 "data_offset": 0, 00:38:59.232 "data_size": 65536 00:38:59.232 }, 00:38:59.232 { 00:38:59.232 "name": "BaseBdev4", 00:38:59.232 "uuid": "728ba3df-fc6d-5017-a414-def19e90ed83", 00:38:59.232 
"is_configured": true, 00:38:59.232 "data_offset": 0, 00:38:59.232 "data_size": 65536 00:38:59.232 } 00:38:59.232 ] 00:38:59.232 }' 00:38:59.232 17:35:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:59.232 17:35:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:59.232 17:35:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:38:59.232 17:35:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.232 17:35:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:59.232 [2024-11-26 17:35:59.263197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:59.232 [2024-11-26 17:35:59.281157] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:38:59.232 17:35:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:59.232 17:35:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:38:59.232 [2024-11-26 17:35:59.283287] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:59.800 17:36:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:59.800 17:36:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:59.800 17:36:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:59.800 17:36:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:59.800 17:36:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:59.800 17:36:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:59.800 17:36:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:38:59.800 17:36:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.800 17:36:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:59.800 17:36:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:59.800 17:36:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:59.800 "name": "raid_bdev1", 00:38:59.800 "uuid": "fc6c55e4-caba-4258-8861-6ef53bd6527f", 00:38:59.800 "strip_size_kb": 0, 00:38:59.800 "state": "online", 00:38:59.800 "raid_level": "raid1", 00:38:59.800 "superblock": false, 00:38:59.800 "num_base_bdevs": 4, 00:38:59.800 "num_base_bdevs_discovered": 4, 00:38:59.800 "num_base_bdevs_operational": 4, 00:38:59.800 "process": { 00:38:59.800 "type": "rebuild", 00:38:59.800 "target": "spare", 00:38:59.800 "progress": { 00:38:59.800 "blocks": 20480, 00:38:59.800 "percent": 31 00:38:59.800 } 00:38:59.800 }, 00:38:59.800 "base_bdevs_list": [ 00:38:59.800 { 00:38:59.800 "name": "spare", 00:38:59.800 "uuid": "0cf55e76-4625-58c7-8529-8d3e9fa332b5", 00:38:59.800 "is_configured": true, 00:38:59.800 "data_offset": 0, 00:38:59.800 "data_size": 65536 00:38:59.800 }, 00:38:59.800 { 00:38:59.800 "name": "BaseBdev2", 00:38:59.801 "uuid": "9aa4ec42-6a63-5e6c-a7fb-628f074781d2", 00:38:59.801 "is_configured": true, 00:38:59.801 "data_offset": 0, 00:38:59.801 "data_size": 65536 00:38:59.801 }, 00:38:59.801 { 00:38:59.801 "name": "BaseBdev3", 00:38:59.801 "uuid": "9e34f94e-72f6-5e6a-bf9c-b46829eff3cf", 00:38:59.801 "is_configured": true, 00:38:59.801 "data_offset": 0, 00:38:59.801 "data_size": 65536 00:38:59.801 }, 00:38:59.801 { 00:38:59.801 "name": "BaseBdev4", 00:38:59.801 "uuid": "728ba3df-fc6d-5017-a414-def19e90ed83", 00:38:59.801 "is_configured": true, 00:38:59.801 "data_offset": 0, 00:38:59.801 "data_size": 65536 00:38:59.801 } 00:38:59.801 ] 00:38:59.801 }' 00:38:59.801 17:36:00 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:59.801 17:36:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:59.801 17:36:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:59.801 17:36:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:59.801 17:36:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:38:59.801 17:36:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:59.801 17:36:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:38:59.801 [2024-11-26 17:36:00.426276] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:59.801 [2024-11-26 17:36:00.489143] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:38:59.801 [2024-11-26 17:36:00.489231] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:59.801 [2024-11-26 17:36:00.489251] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:59.801 [2024-11-26 17:36:00.489261] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:39:00.060 17:36:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:00.060 17:36:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:39:00.060 17:36:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:00.060 17:36:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:00.060 17:36:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:00.060 17:36:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:39:00.060 17:36:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:00.060 17:36:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:00.060 17:36:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:00.060 17:36:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:00.060 17:36:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:00.060 17:36:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:00.060 17:36:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:00.060 17:36:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:00.060 17:36:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:39:00.060 17:36:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:00.060 17:36:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:00.060 "name": "raid_bdev1", 00:39:00.060 "uuid": "fc6c55e4-caba-4258-8861-6ef53bd6527f", 00:39:00.060 "strip_size_kb": 0, 00:39:00.060 "state": "online", 00:39:00.060 "raid_level": "raid1", 00:39:00.060 "superblock": false, 00:39:00.060 "num_base_bdevs": 4, 00:39:00.060 "num_base_bdevs_discovered": 3, 00:39:00.060 "num_base_bdevs_operational": 3, 00:39:00.060 "base_bdevs_list": [ 00:39:00.060 { 00:39:00.060 "name": null, 00:39:00.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:00.060 "is_configured": false, 00:39:00.060 "data_offset": 0, 00:39:00.060 "data_size": 65536 00:39:00.060 }, 00:39:00.060 { 00:39:00.060 "name": "BaseBdev2", 00:39:00.060 "uuid": "9aa4ec42-6a63-5e6c-a7fb-628f074781d2", 00:39:00.060 "is_configured": true, 00:39:00.060 "data_offset": 0, 00:39:00.060 "data_size": 65536 00:39:00.060 }, 00:39:00.060 { 
00:39:00.060 "name": "BaseBdev3", 00:39:00.060 "uuid": "9e34f94e-72f6-5e6a-bf9c-b46829eff3cf", 00:39:00.060 "is_configured": true, 00:39:00.060 "data_offset": 0, 00:39:00.060 "data_size": 65536 00:39:00.060 }, 00:39:00.060 { 00:39:00.060 "name": "BaseBdev4", 00:39:00.060 "uuid": "728ba3df-fc6d-5017-a414-def19e90ed83", 00:39:00.060 "is_configured": true, 00:39:00.060 "data_offset": 0, 00:39:00.060 "data_size": 65536 00:39:00.060 } 00:39:00.060 ] 00:39:00.060 }' 00:39:00.060 17:36:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:00.060 17:36:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:39:00.630 17:36:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:00.630 17:36:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:00.630 17:36:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:39:00.630 17:36:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:39:00.630 17:36:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:00.630 17:36:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:00.630 17:36:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:00.630 17:36:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:39:00.630 17:36:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:00.630 17:36:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:00.630 17:36:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:00.630 "name": "raid_bdev1", 00:39:00.630 "uuid": "fc6c55e4-caba-4258-8861-6ef53bd6527f", 00:39:00.630 "strip_size_kb": 0, 00:39:00.630 "state": "online", 
00:39:00.630 "raid_level": "raid1", 00:39:00.630 "superblock": false, 00:39:00.630 "num_base_bdevs": 4, 00:39:00.630 "num_base_bdevs_discovered": 3, 00:39:00.630 "num_base_bdevs_operational": 3, 00:39:00.630 "base_bdevs_list": [ 00:39:00.630 { 00:39:00.630 "name": null, 00:39:00.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:00.630 "is_configured": false, 00:39:00.630 "data_offset": 0, 00:39:00.630 "data_size": 65536 00:39:00.630 }, 00:39:00.630 { 00:39:00.630 "name": "BaseBdev2", 00:39:00.630 "uuid": "9aa4ec42-6a63-5e6c-a7fb-628f074781d2", 00:39:00.630 "is_configured": true, 00:39:00.630 "data_offset": 0, 00:39:00.630 "data_size": 65536 00:39:00.630 }, 00:39:00.630 { 00:39:00.630 "name": "BaseBdev3", 00:39:00.630 "uuid": "9e34f94e-72f6-5e6a-bf9c-b46829eff3cf", 00:39:00.630 "is_configured": true, 00:39:00.630 "data_offset": 0, 00:39:00.630 "data_size": 65536 00:39:00.630 }, 00:39:00.630 { 00:39:00.630 "name": "BaseBdev4", 00:39:00.630 "uuid": "728ba3df-fc6d-5017-a414-def19e90ed83", 00:39:00.630 "is_configured": true, 00:39:00.630 "data_offset": 0, 00:39:00.630 "data_size": 65536 00:39:00.630 } 00:39:00.630 ] 00:39:00.630 }' 00:39:00.630 17:36:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:00.630 17:36:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:39:00.630 17:36:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:00.630 17:36:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:39:00.630 17:36:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:39:00.630 17:36:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:00.630 17:36:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:39:00.630 [2024-11-26 17:36:01.165403] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:00.630 [2024-11-26 17:36:01.181652] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:39:00.630 17:36:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:00.630 17:36:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:39:00.630 [2024-11-26 17:36:01.183812] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:39:01.567 17:36:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:01.567 17:36:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:01.567 17:36:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:01.567 17:36:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:01.567 17:36:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:01.567 17:36:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:01.567 17:36:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.567 17:36:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:01.567 17:36:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:39:01.567 17:36:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.567 17:36:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:01.567 "name": "raid_bdev1", 00:39:01.567 "uuid": "fc6c55e4-caba-4258-8861-6ef53bd6527f", 00:39:01.567 "strip_size_kb": 0, 00:39:01.567 "state": "online", 00:39:01.567 "raid_level": "raid1", 00:39:01.567 "superblock": false, 00:39:01.567 "num_base_bdevs": 4, 00:39:01.567 
"num_base_bdevs_discovered": 4, 00:39:01.567 "num_base_bdevs_operational": 4, 00:39:01.567 "process": { 00:39:01.567 "type": "rebuild", 00:39:01.567 "target": "spare", 00:39:01.567 "progress": { 00:39:01.567 "blocks": 20480, 00:39:01.567 "percent": 31 00:39:01.567 } 00:39:01.567 }, 00:39:01.567 "base_bdevs_list": [ 00:39:01.567 { 00:39:01.567 "name": "spare", 00:39:01.567 "uuid": "0cf55e76-4625-58c7-8529-8d3e9fa332b5", 00:39:01.567 "is_configured": true, 00:39:01.568 "data_offset": 0, 00:39:01.568 "data_size": 65536 00:39:01.568 }, 00:39:01.568 { 00:39:01.568 "name": "BaseBdev2", 00:39:01.568 "uuid": "9aa4ec42-6a63-5e6c-a7fb-628f074781d2", 00:39:01.568 "is_configured": true, 00:39:01.568 "data_offset": 0, 00:39:01.568 "data_size": 65536 00:39:01.568 }, 00:39:01.568 { 00:39:01.568 "name": "BaseBdev3", 00:39:01.568 "uuid": "9e34f94e-72f6-5e6a-bf9c-b46829eff3cf", 00:39:01.568 "is_configured": true, 00:39:01.568 "data_offset": 0, 00:39:01.568 "data_size": 65536 00:39:01.568 }, 00:39:01.568 { 00:39:01.568 "name": "BaseBdev4", 00:39:01.568 "uuid": "728ba3df-fc6d-5017-a414-def19e90ed83", 00:39:01.568 "is_configured": true, 00:39:01.568 "data_offset": 0, 00:39:01.568 "data_size": 65536 00:39:01.568 } 00:39:01.568 ] 00:39:01.568 }' 00:39:01.568 17:36:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:01.828 17:36:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:01.828 17:36:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:01.828 17:36:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:39:01.828 17:36:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:39:01.828 17:36:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:39:01.828 17:36:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = 
raid1 ']' 00:39:01.828 17:36:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:39:01.828 17:36:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:39:01.828 17:36:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.828 17:36:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:39:01.828 [2024-11-26 17:36:02.334770] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:39:01.828 [2024-11-26 17:36:02.389726] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:39:01.828 17:36:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.828 17:36:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:39:01.828 17:36:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:39:01.828 17:36:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:01.828 17:36:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:01.828 17:36:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:01.828 17:36:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:01.828 17:36:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:01.828 17:36:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:01.828 17:36:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:01.828 17:36:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.828 17:36:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:39:01.828 17:36:02 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.828 17:36:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:01.828 "name": "raid_bdev1", 00:39:01.828 "uuid": "fc6c55e4-caba-4258-8861-6ef53bd6527f", 00:39:01.828 "strip_size_kb": 0, 00:39:01.828 "state": "online", 00:39:01.828 "raid_level": "raid1", 00:39:01.828 "superblock": false, 00:39:01.828 "num_base_bdevs": 4, 00:39:01.828 "num_base_bdevs_discovered": 3, 00:39:01.828 "num_base_bdevs_operational": 3, 00:39:01.828 "process": { 00:39:01.828 "type": "rebuild", 00:39:01.828 "target": "spare", 00:39:01.828 "progress": { 00:39:01.828 "blocks": 24576, 00:39:01.828 "percent": 37 00:39:01.828 } 00:39:01.828 }, 00:39:01.828 "base_bdevs_list": [ 00:39:01.828 { 00:39:01.828 "name": "spare", 00:39:01.828 "uuid": "0cf55e76-4625-58c7-8529-8d3e9fa332b5", 00:39:01.828 "is_configured": true, 00:39:01.828 "data_offset": 0, 00:39:01.828 "data_size": 65536 00:39:01.828 }, 00:39:01.828 { 00:39:01.828 "name": null, 00:39:01.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:01.828 "is_configured": false, 00:39:01.828 "data_offset": 0, 00:39:01.828 "data_size": 65536 00:39:01.828 }, 00:39:01.828 { 00:39:01.828 "name": "BaseBdev3", 00:39:01.828 "uuid": "9e34f94e-72f6-5e6a-bf9c-b46829eff3cf", 00:39:01.828 "is_configured": true, 00:39:01.828 "data_offset": 0, 00:39:01.828 "data_size": 65536 00:39:01.828 }, 00:39:01.828 { 00:39:01.828 "name": "BaseBdev4", 00:39:01.828 "uuid": "728ba3df-fc6d-5017-a414-def19e90ed83", 00:39:01.828 "is_configured": true, 00:39:01.828 "data_offset": 0, 00:39:01.828 "data_size": 65536 00:39:01.828 } 00:39:01.828 ] 00:39:01.828 }' 00:39:01.828 17:36:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:01.828 17:36:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:01.828 17:36:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:39:02.087 17:36:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:39:02.087 17:36:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=457 00:39:02.087 17:36:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:39:02.087 17:36:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:02.087 17:36:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:02.087 17:36:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:02.087 17:36:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:02.087 17:36:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:02.087 17:36:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:02.087 17:36:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:02.087 17:36:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:02.087 17:36:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:39:02.087 17:36:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:02.087 17:36:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:02.087 "name": "raid_bdev1", 00:39:02.087 "uuid": "fc6c55e4-caba-4258-8861-6ef53bd6527f", 00:39:02.087 "strip_size_kb": 0, 00:39:02.087 "state": "online", 00:39:02.087 "raid_level": "raid1", 00:39:02.087 "superblock": false, 00:39:02.087 "num_base_bdevs": 4, 00:39:02.087 "num_base_bdevs_discovered": 3, 00:39:02.087 "num_base_bdevs_operational": 3, 00:39:02.087 "process": { 00:39:02.087 "type": "rebuild", 00:39:02.087 "target": "spare", 00:39:02.087 "progress": { 
00:39:02.087 "blocks": 26624, 00:39:02.087 "percent": 40 00:39:02.087 } 00:39:02.087 }, 00:39:02.087 "base_bdevs_list": [ 00:39:02.087 { 00:39:02.087 "name": "spare", 00:39:02.087 "uuid": "0cf55e76-4625-58c7-8529-8d3e9fa332b5", 00:39:02.087 "is_configured": true, 00:39:02.087 "data_offset": 0, 00:39:02.087 "data_size": 65536 00:39:02.088 }, 00:39:02.088 { 00:39:02.088 "name": null, 00:39:02.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:02.088 "is_configured": false, 00:39:02.088 "data_offset": 0, 00:39:02.088 "data_size": 65536 00:39:02.088 }, 00:39:02.088 { 00:39:02.088 "name": "BaseBdev3", 00:39:02.088 "uuid": "9e34f94e-72f6-5e6a-bf9c-b46829eff3cf", 00:39:02.088 "is_configured": true, 00:39:02.088 "data_offset": 0, 00:39:02.088 "data_size": 65536 00:39:02.088 }, 00:39:02.088 { 00:39:02.088 "name": "BaseBdev4", 00:39:02.088 "uuid": "728ba3df-fc6d-5017-a414-def19e90ed83", 00:39:02.088 "is_configured": true, 00:39:02.088 "data_offset": 0, 00:39:02.088 "data_size": 65536 00:39:02.088 } 00:39:02.088 ] 00:39:02.088 }' 00:39:02.088 17:36:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:02.088 17:36:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:02.088 17:36:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:02.088 17:36:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:39:02.088 17:36:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:39:03.027 17:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:39:03.027 17:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:03.028 17:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:03.028 17:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:39:03.028 17:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:03.028 17:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:03.028 17:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:03.028 17:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:03.028 17:36:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:03.028 17:36:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:39:03.028 17:36:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:03.286 17:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:03.286 "name": "raid_bdev1", 00:39:03.286 "uuid": "fc6c55e4-caba-4258-8861-6ef53bd6527f", 00:39:03.286 "strip_size_kb": 0, 00:39:03.286 "state": "online", 00:39:03.286 "raid_level": "raid1", 00:39:03.286 "superblock": false, 00:39:03.286 "num_base_bdevs": 4, 00:39:03.286 "num_base_bdevs_discovered": 3, 00:39:03.286 "num_base_bdevs_operational": 3, 00:39:03.286 "process": { 00:39:03.286 "type": "rebuild", 00:39:03.286 "target": "spare", 00:39:03.286 "progress": { 00:39:03.286 "blocks": 49152, 00:39:03.286 "percent": 75 00:39:03.286 } 00:39:03.286 }, 00:39:03.286 "base_bdevs_list": [ 00:39:03.286 { 00:39:03.286 "name": "spare", 00:39:03.286 "uuid": "0cf55e76-4625-58c7-8529-8d3e9fa332b5", 00:39:03.286 "is_configured": true, 00:39:03.286 "data_offset": 0, 00:39:03.286 "data_size": 65536 00:39:03.286 }, 00:39:03.286 { 00:39:03.286 "name": null, 00:39:03.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:03.286 "is_configured": false, 00:39:03.286 "data_offset": 0, 00:39:03.286 "data_size": 65536 00:39:03.286 }, 00:39:03.286 { 00:39:03.286 "name": "BaseBdev3", 00:39:03.286 "uuid": 
"9e34f94e-72f6-5e6a-bf9c-b46829eff3cf", 00:39:03.286 "is_configured": true, 00:39:03.286 "data_offset": 0, 00:39:03.286 "data_size": 65536 00:39:03.286 }, 00:39:03.286 { 00:39:03.286 "name": "BaseBdev4", 00:39:03.286 "uuid": "728ba3df-fc6d-5017-a414-def19e90ed83", 00:39:03.286 "is_configured": true, 00:39:03.286 "data_offset": 0, 00:39:03.286 "data_size": 65536 00:39:03.286 } 00:39:03.286 ] 00:39:03.286 }' 00:39:03.286 17:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:03.286 17:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:03.286 17:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:03.286 17:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:39:03.286 17:36:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:39:03.853 [2024-11-26 17:36:04.399519] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:39:03.853 [2024-11-26 17:36:04.399620] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:39:03.853 [2024-11-26 17:36:04.399667] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:04.422 17:36:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:39:04.422 17:36:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:04.422 17:36:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:04.422 17:36:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:04.422 17:36:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:04.422 17:36:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:04.422 17:36:04 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:04.422 17:36:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:04.422 17:36:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:04.422 17:36:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:39:04.422 17:36:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:04.422 17:36:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:04.422 "name": "raid_bdev1", 00:39:04.422 "uuid": "fc6c55e4-caba-4258-8861-6ef53bd6527f", 00:39:04.422 "strip_size_kb": 0, 00:39:04.422 "state": "online", 00:39:04.422 "raid_level": "raid1", 00:39:04.422 "superblock": false, 00:39:04.422 "num_base_bdevs": 4, 00:39:04.422 "num_base_bdevs_discovered": 3, 00:39:04.422 "num_base_bdevs_operational": 3, 00:39:04.422 "base_bdevs_list": [ 00:39:04.422 { 00:39:04.422 "name": "spare", 00:39:04.422 "uuid": "0cf55e76-4625-58c7-8529-8d3e9fa332b5", 00:39:04.422 "is_configured": true, 00:39:04.422 "data_offset": 0, 00:39:04.423 "data_size": 65536 00:39:04.423 }, 00:39:04.423 { 00:39:04.423 "name": null, 00:39:04.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:04.423 "is_configured": false, 00:39:04.423 "data_offset": 0, 00:39:04.423 "data_size": 65536 00:39:04.423 }, 00:39:04.423 { 00:39:04.423 "name": "BaseBdev3", 00:39:04.423 "uuid": "9e34f94e-72f6-5e6a-bf9c-b46829eff3cf", 00:39:04.423 "is_configured": true, 00:39:04.423 "data_offset": 0, 00:39:04.423 "data_size": 65536 00:39:04.423 }, 00:39:04.423 { 00:39:04.423 "name": "BaseBdev4", 00:39:04.423 "uuid": "728ba3df-fc6d-5017-a414-def19e90ed83", 00:39:04.423 "is_configured": true, 00:39:04.423 "data_offset": 0, 00:39:04.423 "data_size": 65536 00:39:04.423 } 00:39:04.423 ] 00:39:04.423 }' 00:39:04.423 17:36:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:39:04.423 17:36:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:39:04.423 17:36:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:04.423 17:36:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:39:04.423 17:36:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:39:04.423 17:36:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:04.423 17:36:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:04.423 17:36:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:39:04.423 17:36:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:39:04.423 17:36:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:04.423 17:36:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:04.423 17:36:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:04.423 17:36:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:04.423 17:36:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:39:04.423 17:36:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:04.423 17:36:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:04.423 "name": "raid_bdev1", 00:39:04.423 "uuid": "fc6c55e4-caba-4258-8861-6ef53bd6527f", 00:39:04.423 "strip_size_kb": 0, 00:39:04.423 "state": "online", 00:39:04.423 "raid_level": "raid1", 00:39:04.423 "superblock": false, 00:39:04.423 "num_base_bdevs": 4, 00:39:04.423 "num_base_bdevs_discovered": 3, 00:39:04.423 "num_base_bdevs_operational": 3, 00:39:04.423 
"base_bdevs_list": [ 00:39:04.423 { 00:39:04.423 "name": "spare", 00:39:04.423 "uuid": "0cf55e76-4625-58c7-8529-8d3e9fa332b5", 00:39:04.423 "is_configured": true, 00:39:04.423 "data_offset": 0, 00:39:04.423 "data_size": 65536 00:39:04.423 }, 00:39:04.423 { 00:39:04.423 "name": null, 00:39:04.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:04.423 "is_configured": false, 00:39:04.423 "data_offset": 0, 00:39:04.423 "data_size": 65536 00:39:04.423 }, 00:39:04.423 { 00:39:04.423 "name": "BaseBdev3", 00:39:04.423 "uuid": "9e34f94e-72f6-5e6a-bf9c-b46829eff3cf", 00:39:04.423 "is_configured": true, 00:39:04.423 "data_offset": 0, 00:39:04.423 "data_size": 65536 00:39:04.423 }, 00:39:04.423 { 00:39:04.423 "name": "BaseBdev4", 00:39:04.423 "uuid": "728ba3df-fc6d-5017-a414-def19e90ed83", 00:39:04.423 "is_configured": true, 00:39:04.423 "data_offset": 0, 00:39:04.423 "data_size": 65536 00:39:04.423 } 00:39:04.423 ] 00:39:04.423 }' 00:39:04.423 17:36:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:04.423 17:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:39:04.423 17:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:04.423 17:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:39:04.423 17:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:39:04.423 17:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:04.423 17:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:04.423 17:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:04.423 17:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:04.423 17:36:05 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:04.423 17:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:04.423 17:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:04.423 17:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:04.423 17:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:04.423 17:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:04.423 17:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:04.423 17:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:04.423 17:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:39:04.423 17:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:04.686 17:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:04.686 "name": "raid_bdev1", 00:39:04.686 "uuid": "fc6c55e4-caba-4258-8861-6ef53bd6527f", 00:39:04.686 "strip_size_kb": 0, 00:39:04.686 "state": "online", 00:39:04.686 "raid_level": "raid1", 00:39:04.686 "superblock": false, 00:39:04.686 "num_base_bdevs": 4, 00:39:04.686 "num_base_bdevs_discovered": 3, 00:39:04.686 "num_base_bdevs_operational": 3, 00:39:04.686 "base_bdevs_list": [ 00:39:04.686 { 00:39:04.686 "name": "spare", 00:39:04.686 "uuid": "0cf55e76-4625-58c7-8529-8d3e9fa332b5", 00:39:04.686 "is_configured": true, 00:39:04.686 "data_offset": 0, 00:39:04.686 "data_size": 65536 00:39:04.686 }, 00:39:04.686 { 00:39:04.686 "name": null, 00:39:04.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:04.686 "is_configured": false, 00:39:04.686 "data_offset": 0, 00:39:04.686 "data_size": 65536 00:39:04.686 }, 00:39:04.686 { 00:39:04.686 "name": "BaseBdev3", 00:39:04.686 "uuid": 
"9e34f94e-72f6-5e6a-bf9c-b46829eff3cf", 00:39:04.686 "is_configured": true, 00:39:04.686 "data_offset": 0, 00:39:04.686 "data_size": 65536 00:39:04.686 }, 00:39:04.686 { 00:39:04.686 "name": "BaseBdev4", 00:39:04.686 "uuid": "728ba3df-fc6d-5017-a414-def19e90ed83", 00:39:04.686 "is_configured": true, 00:39:04.686 "data_offset": 0, 00:39:04.686 "data_size": 65536 00:39:04.686 } 00:39:04.686 ] 00:39:04.686 }' 00:39:04.686 17:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:04.686 17:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:39:04.945 17:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:39:04.945 17:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:04.945 17:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:39:04.945 [2024-11-26 17:36:05.545660] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:04.945 [2024-11-26 17:36:05.545698] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:04.945 [2024-11-26 17:36:05.545797] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:04.945 [2024-11-26 17:36:05.545885] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:04.945 [2024-11-26 17:36:05.545903] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:39:04.945 17:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:04.945 17:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:04.945 17:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:39:04.945 17:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:39:04.945 17:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:39:04.945 17:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:04.945 17:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:39:04.945 17:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:39:04.945 17:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:39:04.945 17:36:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:39:04.945 17:36:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:39:04.945 17:36:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:39:04.945 17:36:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:39:04.945 17:36:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:39:04.945 17:36:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:39:04.945 17:36:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:39:04.945 17:36:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:39:04.945 17:36:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:39:04.945 17:36:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:39:05.204 /dev/nbd0 00:39:05.204 17:36:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:39:05.204 17:36:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:39:05.204 17:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:39:05.204 17:36:05 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:39:05.204 17:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:05.204 17:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:05.204 17:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:39:05.204 17:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:39:05.204 17:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:05.204 17:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:05.204 17:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:05.204 1+0 records in 00:39:05.204 1+0 records out 00:39:05.204 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002377 s, 17.2 MB/s 00:39:05.204 17:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:05.204 17:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:39:05.204 17:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:05.204 17:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:05.204 17:36:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:39:05.204 17:36:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:05.204 17:36:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:39:05.204 17:36:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:39:05.463 /dev/nbd1 00:39:05.463 17:36:06 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:39:05.463 17:36:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:39:05.463 17:36:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:39:05.463 17:36:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:39:05.463 17:36:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:05.463 17:36:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:05.463 17:36:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:39:05.463 17:36:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:39:05.463 17:36:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:05.463 17:36:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:05.463 17:36:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:05.463 1+0 records in 00:39:05.463 1+0 records out 00:39:05.463 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000434445 s, 9.4 MB/s 00:39:05.463 17:36:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:05.463 17:36:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:39:05.463 17:36:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:05.463 17:36:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:05.463 17:36:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:39:05.463 17:36:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:39:05.463 17:36:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:39:05.463 17:36:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:39:05.722 17:36:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:39:05.722 17:36:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:39:05.722 17:36:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:39:05.722 17:36:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:05.722 17:36:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:39:05.722 17:36:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:05.722 17:36:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:39:05.980 17:36:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:39:05.980 17:36:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:39:05.980 17:36:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:39:05.980 17:36:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:05.980 17:36:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:05.980 17:36:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:39:05.980 17:36:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:39:05.980 17:36:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:39:05.980 17:36:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:05.980 17:36:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:39:06.239 17:36:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:39:06.239 17:36:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:39:06.239 17:36:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:39:06.239 17:36:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:06.239 17:36:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:06.239 17:36:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:39:06.239 17:36:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:39:06.239 17:36:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:39:06.239 17:36:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:39:06.239 17:36:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77833 00:39:06.239 17:36:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77833 ']' 00:39:06.239 17:36:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77833 00:39:06.239 17:36:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:39:06.239 17:36:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:06.239 17:36:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77833 00:39:06.239 17:36:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:06.239 17:36:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:06.239 killing process with pid 77833 00:39:06.239 17:36:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77833' 00:39:06.239 
17:36:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77833 00:39:06.239 Received shutdown signal, test time was about 60.000000 seconds 00:39:06.239 00:39:06.239 Latency(us) 00:39:06.239 [2024-11-26T17:36:06.934Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:06.239 [2024-11-26T17:36:06.934Z] =================================================================================================================== 00:39:06.239 [2024-11-26T17:36:06.934Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:39:06.239 [2024-11-26 17:36:06.850646] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:39:06.239 17:36:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77833 00:39:06.805 [2024-11-26 17:36:07.375719] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:39:08.179 17:36:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:39:08.179 00:39:08.179 real 0m18.945s 00:39:08.179 user 0m21.088s 00:39:08.179 sys 0m3.597s 00:39:08.179 17:36:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:08.179 17:36:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:39:08.179 ************************************ 00:39:08.179 END TEST raid_rebuild_test 00:39:08.179 ************************************ 00:39:08.179 17:36:08 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:39:08.179 17:36:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:39:08.179 17:36:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:08.179 17:36:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:39:08.179 ************************************ 00:39:08.179 START TEST raid_rebuild_test_sb 00:39:08.179 ************************************ 00:39:08.179 17:36:08 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:39:08.179 17:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:39:08.179 17:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:39:08.179 17:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:39:08.179 17:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:39:08.179 17:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:39:08.179 17:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:39:08.179 17:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:39:08.179 17:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:39:08.179 17:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:39:08.179 17:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:39:08.179 17:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:39:08.179 17:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:39:08.179 17:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:39:08.179 17:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:39:08.179 17:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:39:08.179 17:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:39:08.179 17:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:39:08.179 17:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:39:08.179 17:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:39:08.179 17:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:39:08.179 17:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:39:08.179 17:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:39:08.179 17:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:39:08.179 17:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:39:08.179 17:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:39:08.179 17:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:39:08.179 17:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:39:08.179 17:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:39:08.179 17:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:39:08.179 17:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:39:08.179 17:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78292 00:39:08.179 17:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:39:08.179 17:36:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78292 00:39:08.179 17:36:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78292 ']' 00:39:08.179 17:36:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:08.179 17:36:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:08.179 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:39:08.179 17:36:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:08.179 17:36:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:08.179 17:36:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:08.179 I/O size of 3145728 is greater than zero copy threshold (65536). 00:39:08.179 Zero copy mechanism will not be used. 00:39:08.179 [2024-11-26 17:36:08.719308] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:39:08.180 [2024-11-26 17:36:08.719443] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78292 ] 00:39:08.437 [2024-11-26 17:36:08.890828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:08.437 [2024-11-26 17:36:09.005538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:08.694 [2024-11-26 17:36:09.212309] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:08.694 [2024-11-26 17:36:09.212361] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:08.950 17:36:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:08.950 17:36:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:39:08.950 17:36:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:39:08.950 17:36:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:39:08.950 17:36:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:39:08.950 17:36:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:08.950 BaseBdev1_malloc 00:39:08.950 17:36:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:08.950 17:36:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:39:08.950 17:36:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:08.950 17:36:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:08.951 [2024-11-26 17:36:09.613884] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:39:08.951 [2024-11-26 17:36:09.613956] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:08.951 [2024-11-26 17:36:09.613979] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:39:08.951 [2024-11-26 17:36:09.613997] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:08.951 [2024-11-26 17:36:09.616220] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:08.951 [2024-11-26 17:36:09.616262] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:39:08.951 BaseBdev1 00:39:08.951 17:36:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:08.951 17:36:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:39:08.951 17:36:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:39:08.951 17:36:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:08.951 17:36:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:09.209 BaseBdev2_malloc 00:39:09.209 17:36:09 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.209 17:36:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:39:09.209 17:36:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.209 17:36:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:09.209 [2024-11-26 17:36:09.670748] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:39:09.209 [2024-11-26 17:36:09.670828] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:09.209 [2024-11-26 17:36:09.670851] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:39:09.209 [2024-11-26 17:36:09.670863] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:09.209 [2024-11-26 17:36:09.672958] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:09.209 [2024-11-26 17:36:09.673016] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:39:09.209 BaseBdev2 00:39:09.209 17:36:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.209 17:36:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:39:09.209 17:36:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:39:09.209 17:36:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.209 17:36:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:09.209 BaseBdev3_malloc 00:39:09.209 17:36:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.209 17:36:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 
00:39:09.209 17:36:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.209 17:36:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:09.209 [2024-11-26 17:36:09.741522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:39:09.209 [2024-11-26 17:36:09.741615] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:09.209 [2024-11-26 17:36:09.741649] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:39:09.209 [2024-11-26 17:36:09.741660] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:09.209 [2024-11-26 17:36:09.743691] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:09.209 [2024-11-26 17:36:09.743728] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:39:09.209 BaseBdev3 00:39:09.209 17:36:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.209 17:36:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:39:09.209 17:36:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:39:09.209 17:36:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.209 17:36:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:09.209 BaseBdev4_malloc 00:39:09.209 17:36:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.209 17:36:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:39:09.209 17:36:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.209 17:36:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:39:09.209 [2024-11-26 17:36:09.796993] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:39:09.209 [2024-11-26 17:36:09.797057] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:09.209 [2024-11-26 17:36:09.797082] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:39:09.209 [2024-11-26 17:36:09.797095] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:09.209 [2024-11-26 17:36:09.799468] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:09.209 [2024-11-26 17:36:09.799508] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:39:09.209 BaseBdev4 00:39:09.209 17:36:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.209 17:36:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:39:09.209 17:36:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.209 17:36:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:09.209 spare_malloc 00:39:09.209 17:36:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.209 17:36:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:39:09.210 17:36:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.210 17:36:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:09.210 spare_delay 00:39:09.210 17:36:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.210 17:36:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:39:09.210 17:36:09 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.210 17:36:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:09.210 [2024-11-26 17:36:09.867262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:39:09.210 [2024-11-26 17:36:09.867330] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:09.210 [2024-11-26 17:36:09.867353] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:39:09.210 [2024-11-26 17:36:09.867364] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:09.210 [2024-11-26 17:36:09.869658] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:09.210 [2024-11-26 17:36:09.869699] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:39:09.210 spare 00:39:09.210 17:36:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.210 17:36:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:39:09.210 17:36:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.210 17:36:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:09.210 [2024-11-26 17:36:09.879269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:09.210 [2024-11-26 17:36:09.881272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:39:09.210 [2024-11-26 17:36:09.881349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:39:09.210 [2024-11-26 17:36:09.881408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:39:09.210 [2024-11-26 17:36:09.881629] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:39:09.210 [2024-11-26 17:36:09.881653] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:39:09.210 [2024-11-26 17:36:09.881948] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:39:09.210 [2024-11-26 17:36:09.882154] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:39:09.210 [2024-11-26 17:36:09.882174] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:39:09.210 [2024-11-26 17:36:09.882367] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:09.210 17:36:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.210 17:36:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:39:09.210 17:36:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:09.210 17:36:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:09.210 17:36:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:09.210 17:36:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:09.210 17:36:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:09.210 17:36:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:09.210 17:36:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:09.210 17:36:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:09.210 17:36:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:09.210 17:36:09 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:09.210 17:36:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:09.210 17:36:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.210 17:36:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:09.468 17:36:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.468 17:36:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:09.468 "name": "raid_bdev1", 00:39:09.468 "uuid": "6680d805-cc21-43bc-a1df-3a99966ad433", 00:39:09.468 "strip_size_kb": 0, 00:39:09.468 "state": "online", 00:39:09.468 "raid_level": "raid1", 00:39:09.468 "superblock": true, 00:39:09.468 "num_base_bdevs": 4, 00:39:09.468 "num_base_bdevs_discovered": 4, 00:39:09.468 "num_base_bdevs_operational": 4, 00:39:09.468 "base_bdevs_list": [ 00:39:09.468 { 00:39:09.468 "name": "BaseBdev1", 00:39:09.468 "uuid": "1a47f674-cdea-556b-8a0a-51572703889f", 00:39:09.468 "is_configured": true, 00:39:09.468 "data_offset": 2048, 00:39:09.468 "data_size": 63488 00:39:09.468 }, 00:39:09.468 { 00:39:09.468 "name": "BaseBdev2", 00:39:09.468 "uuid": "15293dcb-cbc7-51ae-ad61-b8bce10bfdb0", 00:39:09.468 "is_configured": true, 00:39:09.468 "data_offset": 2048, 00:39:09.468 "data_size": 63488 00:39:09.468 }, 00:39:09.468 { 00:39:09.468 "name": "BaseBdev3", 00:39:09.468 "uuid": "745afd0a-fcd0-5a01-bcc7-37a5f0d2cc8d", 00:39:09.468 "is_configured": true, 00:39:09.468 "data_offset": 2048, 00:39:09.468 "data_size": 63488 00:39:09.468 }, 00:39:09.468 { 00:39:09.468 "name": "BaseBdev4", 00:39:09.468 "uuid": "39771b43-da32-51c6-8dcc-8126515c8778", 00:39:09.468 "is_configured": true, 00:39:09.468 "data_offset": 2048, 00:39:09.468 "data_size": 63488 00:39:09.468 } 00:39:09.468 ] 00:39:09.468 }' 00:39:09.468 17:36:09 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:09.468 17:36:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:09.726 17:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:39:09.726 17:36:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.726 17:36:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:09.726 17:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:39:09.726 [2024-11-26 17:36:10.342848] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:09.726 17:36:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.726 17:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:39:09.726 17:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:09.726 17:36:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.726 17:36:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:09.726 17:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:39:09.726 17:36:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.983 17:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:39:09.983 17:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:39:09.983 17:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:39:09.983 17:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:39:09.983 17:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 
00:39:09.983 17:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:39:09.983 17:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:39:09.983 17:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:39:09.983 17:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:39:09.983 17:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:39:09.983 17:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:39:09.983 17:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:39:09.983 17:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:09.983 17:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:39:09.983 [2024-11-26 17:36:10.638023] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:39:09.983 /dev/nbd0 00:39:09.984 17:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:39:10.240 17:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:39:10.241 17:36:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:39:10.241 17:36:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:39:10.241 17:36:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:10.241 17:36:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:10.241 17:36:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:39:10.241 17:36:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:39:10.241 
17:36:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:10.241 17:36:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:10.241 17:36:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:10.241 1+0 records in 00:39:10.241 1+0 records out 00:39:10.241 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000396807 s, 10.3 MB/s 00:39:10.241 17:36:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:10.241 17:36:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:39:10.241 17:36:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:10.241 17:36:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:10.241 17:36:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:39:10.241 17:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:10.241 17:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:10.241 17:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:39:10.241 17:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:39:10.241 17:36:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:39:15.585 63488+0 records in 00:39:15.585 63488+0 records out 00:39:15.585 32505856 bytes (33 MB, 31 MiB) copied, 5.45703 s, 6.0 MB/s 00:39:15.585 17:36:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:39:15.585 17:36:16 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:39:15.585 17:36:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:39:15.585 17:36:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:15.585 17:36:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:39:15.585 17:36:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:15.585 17:36:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:39:15.852 [2024-11-26 17:36:16.360224] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:15.852 17:36:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:39:15.852 17:36:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:39:15.852 17:36:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:39:15.852 17:36:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:15.852 17:36:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:15.852 17:36:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:39:15.852 17:36:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:39:15.852 17:36:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:39:15.852 17:36:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:39:15.852 17:36:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:15.852 17:36:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:15.852 [2024-11-26 17:36:16.392267] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:39:15.852 
17:36:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:15.852 17:36:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:39:15.852 17:36:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:15.852 17:36:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:15.852 17:36:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:15.852 17:36:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:15.852 17:36:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:15.852 17:36:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:15.852 17:36:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:15.852 17:36:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:15.852 17:36:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:15.852 17:36:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:15.852 17:36:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:15.852 17:36:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:15.852 17:36:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:15.852 17:36:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:15.852 17:36:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:15.852 "name": "raid_bdev1", 00:39:15.852 "uuid": "6680d805-cc21-43bc-a1df-3a99966ad433", 00:39:15.852 "strip_size_kb": 0, 00:39:15.852 "state": 
"online", 00:39:15.852 "raid_level": "raid1", 00:39:15.852 "superblock": true, 00:39:15.852 "num_base_bdevs": 4, 00:39:15.852 "num_base_bdevs_discovered": 3, 00:39:15.852 "num_base_bdevs_operational": 3, 00:39:15.852 "base_bdevs_list": [ 00:39:15.852 { 00:39:15.852 "name": null, 00:39:15.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:15.852 "is_configured": false, 00:39:15.852 "data_offset": 0, 00:39:15.852 "data_size": 63488 00:39:15.852 }, 00:39:15.852 { 00:39:15.852 "name": "BaseBdev2", 00:39:15.853 "uuid": "15293dcb-cbc7-51ae-ad61-b8bce10bfdb0", 00:39:15.853 "is_configured": true, 00:39:15.853 "data_offset": 2048, 00:39:15.853 "data_size": 63488 00:39:15.853 }, 00:39:15.853 { 00:39:15.853 "name": "BaseBdev3", 00:39:15.853 "uuid": "745afd0a-fcd0-5a01-bcc7-37a5f0d2cc8d", 00:39:15.853 "is_configured": true, 00:39:15.853 "data_offset": 2048, 00:39:15.853 "data_size": 63488 00:39:15.853 }, 00:39:15.853 { 00:39:15.853 "name": "BaseBdev4", 00:39:15.853 "uuid": "39771b43-da32-51c6-8dcc-8126515c8778", 00:39:15.853 "is_configured": true, 00:39:15.853 "data_offset": 2048, 00:39:15.853 "data_size": 63488 00:39:15.853 } 00:39:15.853 ] 00:39:15.853 }' 00:39:15.853 17:36:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:15.853 17:36:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:16.420 17:36:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:39:16.420 17:36:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:16.420 17:36:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:16.420 [2024-11-26 17:36:16.851625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:16.420 [2024-11-26 17:36:16.868283] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:39:16.420 17:36:16 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:16.420 17:36:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:39:16.420 [2024-11-26 17:36:16.870556] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:39:17.357 17:36:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:17.357 17:36:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:17.357 17:36:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:17.357 17:36:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:17.357 17:36:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:17.357 17:36:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:17.357 17:36:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:17.357 17:36:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:17.357 17:36:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:17.357 17:36:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:17.357 17:36:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:17.357 "name": "raid_bdev1", 00:39:17.357 "uuid": "6680d805-cc21-43bc-a1df-3a99966ad433", 00:39:17.357 "strip_size_kb": 0, 00:39:17.357 "state": "online", 00:39:17.357 "raid_level": "raid1", 00:39:17.357 "superblock": true, 00:39:17.357 "num_base_bdevs": 4, 00:39:17.357 "num_base_bdevs_discovered": 4, 00:39:17.357 "num_base_bdevs_operational": 4, 00:39:17.357 "process": { 00:39:17.357 "type": "rebuild", 00:39:17.357 "target": "spare", 00:39:17.357 "progress": { 00:39:17.357 "blocks": 20480, 
00:39:17.357 "percent": 32 00:39:17.357 } 00:39:17.357 }, 00:39:17.357 "base_bdevs_list": [ 00:39:17.357 { 00:39:17.357 "name": "spare", 00:39:17.357 "uuid": "32889ee8-15c6-5f9f-939b-3eb0001b16ea", 00:39:17.357 "is_configured": true, 00:39:17.357 "data_offset": 2048, 00:39:17.357 "data_size": 63488 00:39:17.357 }, 00:39:17.357 { 00:39:17.357 "name": "BaseBdev2", 00:39:17.357 "uuid": "15293dcb-cbc7-51ae-ad61-b8bce10bfdb0", 00:39:17.357 "is_configured": true, 00:39:17.357 "data_offset": 2048, 00:39:17.357 "data_size": 63488 00:39:17.357 }, 00:39:17.357 { 00:39:17.357 "name": "BaseBdev3", 00:39:17.357 "uuid": "745afd0a-fcd0-5a01-bcc7-37a5f0d2cc8d", 00:39:17.357 "is_configured": true, 00:39:17.357 "data_offset": 2048, 00:39:17.357 "data_size": 63488 00:39:17.357 }, 00:39:17.357 { 00:39:17.357 "name": "BaseBdev4", 00:39:17.357 "uuid": "39771b43-da32-51c6-8dcc-8126515c8778", 00:39:17.357 "is_configured": true, 00:39:17.357 "data_offset": 2048, 00:39:17.357 "data_size": 63488 00:39:17.357 } 00:39:17.357 ] 00:39:17.357 }' 00:39:17.357 17:36:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:17.357 17:36:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:17.357 17:36:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:17.357 17:36:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:39:17.357 17:36:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:39:17.357 17:36:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:17.357 17:36:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:17.357 [2024-11-26 17:36:18.021429] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:17.617 [2024-11-26 17:36:18.081241] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:39:17.617 [2024-11-26 17:36:18.081349] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:17.617 [2024-11-26 17:36:18.081385] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:17.617 [2024-11-26 17:36:18.081398] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:39:17.617 17:36:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:17.617 17:36:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:39:17.617 17:36:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:17.617 17:36:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:17.617 17:36:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:17.617 17:36:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:17.617 17:36:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:17.617 17:36:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:17.617 17:36:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:17.617 17:36:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:17.617 17:36:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:17.617 17:36:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:17.617 17:36:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:17.617 17:36:18 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:39:17.617 17:36:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:17.617 17:36:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:17.617 17:36:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:17.617 "name": "raid_bdev1", 00:39:17.617 "uuid": "6680d805-cc21-43bc-a1df-3a99966ad433", 00:39:17.617 "strip_size_kb": 0, 00:39:17.617 "state": "online", 00:39:17.617 "raid_level": "raid1", 00:39:17.617 "superblock": true, 00:39:17.617 "num_base_bdevs": 4, 00:39:17.617 "num_base_bdevs_discovered": 3, 00:39:17.617 "num_base_bdevs_operational": 3, 00:39:17.617 "base_bdevs_list": [ 00:39:17.617 { 00:39:17.617 "name": null, 00:39:17.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:17.617 "is_configured": false, 00:39:17.617 "data_offset": 0, 00:39:17.617 "data_size": 63488 00:39:17.617 }, 00:39:17.617 { 00:39:17.617 "name": "BaseBdev2", 00:39:17.617 "uuid": "15293dcb-cbc7-51ae-ad61-b8bce10bfdb0", 00:39:17.617 "is_configured": true, 00:39:17.617 "data_offset": 2048, 00:39:17.617 "data_size": 63488 00:39:17.617 }, 00:39:17.617 { 00:39:17.617 "name": "BaseBdev3", 00:39:17.617 "uuid": "745afd0a-fcd0-5a01-bcc7-37a5f0d2cc8d", 00:39:17.617 "is_configured": true, 00:39:17.617 "data_offset": 2048, 00:39:17.617 "data_size": 63488 00:39:17.617 }, 00:39:17.617 { 00:39:17.617 "name": "BaseBdev4", 00:39:17.617 "uuid": "39771b43-da32-51c6-8dcc-8126515c8778", 00:39:17.617 "is_configured": true, 00:39:17.617 "data_offset": 2048, 00:39:17.617 "data_size": 63488 00:39:17.617 } 00:39:17.617 ] 00:39:17.617 }' 00:39:17.617 17:36:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:17.617 17:36:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:17.875 17:36:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 
00:39:17.875 17:36:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:17.875 17:36:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:39:17.875 17:36:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:39:17.875 17:36:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:17.875 17:36:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:17.875 17:36:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:17.875 17:36:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:17.875 17:36:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:17.875 17:36:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:18.134 17:36:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:18.134 "name": "raid_bdev1", 00:39:18.134 "uuid": "6680d805-cc21-43bc-a1df-3a99966ad433", 00:39:18.134 "strip_size_kb": 0, 00:39:18.134 "state": "online", 00:39:18.134 "raid_level": "raid1", 00:39:18.134 "superblock": true, 00:39:18.134 "num_base_bdevs": 4, 00:39:18.134 "num_base_bdevs_discovered": 3, 00:39:18.134 "num_base_bdevs_operational": 3, 00:39:18.134 "base_bdevs_list": [ 00:39:18.134 { 00:39:18.134 "name": null, 00:39:18.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:18.134 "is_configured": false, 00:39:18.134 "data_offset": 0, 00:39:18.134 "data_size": 63488 00:39:18.134 }, 00:39:18.134 { 00:39:18.134 "name": "BaseBdev2", 00:39:18.134 "uuid": "15293dcb-cbc7-51ae-ad61-b8bce10bfdb0", 00:39:18.134 "is_configured": true, 00:39:18.135 "data_offset": 2048, 00:39:18.135 "data_size": 63488 00:39:18.135 }, 00:39:18.135 { 00:39:18.135 "name": "BaseBdev3", 00:39:18.135 "uuid": 
"745afd0a-fcd0-5a01-bcc7-37a5f0d2cc8d", 00:39:18.135 "is_configured": true, 00:39:18.135 "data_offset": 2048, 00:39:18.135 "data_size": 63488 00:39:18.135 }, 00:39:18.135 { 00:39:18.135 "name": "BaseBdev4", 00:39:18.135 "uuid": "39771b43-da32-51c6-8dcc-8126515c8778", 00:39:18.135 "is_configured": true, 00:39:18.135 "data_offset": 2048, 00:39:18.135 "data_size": 63488 00:39:18.135 } 00:39:18.135 ] 00:39:18.135 }' 00:39:18.135 17:36:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:18.135 17:36:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:39:18.135 17:36:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:18.135 17:36:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:39:18.135 17:36:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:39:18.135 17:36:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:18.135 17:36:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:18.135 [2024-11-26 17:36:18.678862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:18.135 [2024-11-26 17:36:18.694316] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:39:18.135 17:36:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:18.135 17:36:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:39:18.135 [2024-11-26 17:36:18.696606] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:39:19.094 17:36:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:19.094 17:36:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:39:19.094 17:36:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:19.094 17:36:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:19.094 17:36:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:19.094 17:36:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:19.094 17:36:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:19.094 17:36:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:19.094 17:36:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:19.094 17:36:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:19.094 17:36:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:19.094 "name": "raid_bdev1", 00:39:19.094 "uuid": "6680d805-cc21-43bc-a1df-3a99966ad433", 00:39:19.094 "strip_size_kb": 0, 00:39:19.094 "state": "online", 00:39:19.094 "raid_level": "raid1", 00:39:19.094 "superblock": true, 00:39:19.094 "num_base_bdevs": 4, 00:39:19.094 "num_base_bdevs_discovered": 4, 00:39:19.094 "num_base_bdevs_operational": 4, 00:39:19.094 "process": { 00:39:19.094 "type": "rebuild", 00:39:19.094 "target": "spare", 00:39:19.094 "progress": { 00:39:19.094 "blocks": 20480, 00:39:19.094 "percent": 32 00:39:19.094 } 00:39:19.094 }, 00:39:19.094 "base_bdevs_list": [ 00:39:19.094 { 00:39:19.094 "name": "spare", 00:39:19.094 "uuid": "32889ee8-15c6-5f9f-939b-3eb0001b16ea", 00:39:19.094 "is_configured": true, 00:39:19.094 "data_offset": 2048, 00:39:19.094 "data_size": 63488 00:39:19.094 }, 00:39:19.094 { 00:39:19.095 "name": "BaseBdev2", 00:39:19.095 "uuid": "15293dcb-cbc7-51ae-ad61-b8bce10bfdb0", 00:39:19.095 "is_configured": true, 00:39:19.095 "data_offset": 2048, 
00:39:19.095 "data_size": 63488 00:39:19.095 }, 00:39:19.095 { 00:39:19.095 "name": "BaseBdev3", 00:39:19.095 "uuid": "745afd0a-fcd0-5a01-bcc7-37a5f0d2cc8d", 00:39:19.095 "is_configured": true, 00:39:19.095 "data_offset": 2048, 00:39:19.095 "data_size": 63488 00:39:19.095 }, 00:39:19.095 { 00:39:19.095 "name": "BaseBdev4", 00:39:19.095 "uuid": "39771b43-da32-51c6-8dcc-8126515c8778", 00:39:19.095 "is_configured": true, 00:39:19.095 "data_offset": 2048, 00:39:19.095 "data_size": 63488 00:39:19.095 } 00:39:19.095 ] 00:39:19.095 }' 00:39:19.095 17:36:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:19.355 17:36:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:19.355 17:36:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:19.355 17:36:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:39:19.355 17:36:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:39:19.355 17:36:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:39:19.355 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:39:19.355 17:36:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:39:19.355 17:36:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:39:19.355 17:36:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:39:19.355 17:36:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:39:19.355 17:36:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:19.355 17:36:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:19.355 [2024-11-26 17:36:19.844334] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:39:19.355 [2024-11-26 17:36:20.007180] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:39:19.355 17:36:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:19.355 17:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:39:19.355 17:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:39:19.355 17:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:19.355 17:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:19.355 17:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:19.355 17:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:19.355 17:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:19.355 17:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:19.355 17:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:19.355 17:36:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:19.355 17:36:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:19.355 17:36:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:19.613 17:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:19.613 "name": "raid_bdev1", 00:39:19.613 "uuid": "6680d805-cc21-43bc-a1df-3a99966ad433", 00:39:19.613 "strip_size_kb": 0, 00:39:19.613 "state": "online", 00:39:19.613 "raid_level": "raid1", 00:39:19.613 "superblock": true, 00:39:19.613 "num_base_bdevs": 4, 
00:39:19.613 "num_base_bdevs_discovered": 3, 00:39:19.613 "num_base_bdevs_operational": 3, 00:39:19.613 "process": { 00:39:19.613 "type": "rebuild", 00:39:19.613 "target": "spare", 00:39:19.613 "progress": { 00:39:19.613 "blocks": 24576, 00:39:19.613 "percent": 38 00:39:19.613 } 00:39:19.613 }, 00:39:19.613 "base_bdevs_list": [ 00:39:19.613 { 00:39:19.613 "name": "spare", 00:39:19.613 "uuid": "32889ee8-15c6-5f9f-939b-3eb0001b16ea", 00:39:19.613 "is_configured": true, 00:39:19.613 "data_offset": 2048, 00:39:19.613 "data_size": 63488 00:39:19.613 }, 00:39:19.613 { 00:39:19.613 "name": null, 00:39:19.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:19.613 "is_configured": false, 00:39:19.613 "data_offset": 0, 00:39:19.613 "data_size": 63488 00:39:19.613 }, 00:39:19.613 { 00:39:19.613 "name": "BaseBdev3", 00:39:19.613 "uuid": "745afd0a-fcd0-5a01-bcc7-37a5f0d2cc8d", 00:39:19.613 "is_configured": true, 00:39:19.613 "data_offset": 2048, 00:39:19.613 "data_size": 63488 00:39:19.613 }, 00:39:19.613 { 00:39:19.613 "name": "BaseBdev4", 00:39:19.613 "uuid": "39771b43-da32-51c6-8dcc-8126515c8778", 00:39:19.613 "is_configured": true, 00:39:19.613 "data_offset": 2048, 00:39:19.613 "data_size": 63488 00:39:19.613 } 00:39:19.613 ] 00:39:19.613 }' 00:39:19.613 17:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:19.613 17:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:19.613 17:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:19.613 17:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:39:19.613 17:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=475 00:39:19.613 17:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:39:19.613 17:36:20 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:19.613 17:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:19.613 17:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:19.613 17:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:19.613 17:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:19.613 17:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:19.613 17:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:19.613 17:36:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:19.613 17:36:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:19.613 17:36:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:19.613 17:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:19.613 "name": "raid_bdev1", 00:39:19.613 "uuid": "6680d805-cc21-43bc-a1df-3a99966ad433", 00:39:19.613 "strip_size_kb": 0, 00:39:19.613 "state": "online", 00:39:19.613 "raid_level": "raid1", 00:39:19.614 "superblock": true, 00:39:19.614 "num_base_bdevs": 4, 00:39:19.614 "num_base_bdevs_discovered": 3, 00:39:19.614 "num_base_bdevs_operational": 3, 00:39:19.614 "process": { 00:39:19.614 "type": "rebuild", 00:39:19.614 "target": "spare", 00:39:19.614 "progress": { 00:39:19.614 "blocks": 26624, 00:39:19.614 "percent": 41 00:39:19.614 } 00:39:19.614 }, 00:39:19.614 "base_bdevs_list": [ 00:39:19.614 { 00:39:19.614 "name": "spare", 00:39:19.614 "uuid": "32889ee8-15c6-5f9f-939b-3eb0001b16ea", 00:39:19.614 "is_configured": true, 00:39:19.614 "data_offset": 2048, 00:39:19.614 "data_size": 63488 00:39:19.614 }, 00:39:19.614 { 
00:39:19.614 "name": null, 00:39:19.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:19.614 "is_configured": false, 00:39:19.614 "data_offset": 0, 00:39:19.614 "data_size": 63488 00:39:19.614 }, 00:39:19.614 { 00:39:19.614 "name": "BaseBdev3", 00:39:19.614 "uuid": "745afd0a-fcd0-5a01-bcc7-37a5f0d2cc8d", 00:39:19.614 "is_configured": true, 00:39:19.614 "data_offset": 2048, 00:39:19.614 "data_size": 63488 00:39:19.614 }, 00:39:19.614 { 00:39:19.614 "name": "BaseBdev4", 00:39:19.614 "uuid": "39771b43-da32-51c6-8dcc-8126515c8778", 00:39:19.614 "is_configured": true, 00:39:19.614 "data_offset": 2048, 00:39:19.614 "data_size": 63488 00:39:19.614 } 00:39:19.614 ] 00:39:19.614 }' 00:39:19.614 17:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:19.614 17:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:19.614 17:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:19.614 17:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:39:19.614 17:36:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:39:20.990 17:36:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:39:20.990 17:36:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:20.990 17:36:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:20.990 17:36:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:20.990 17:36:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:20.990 17:36:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:20.990 17:36:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:39:20.990 17:36:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:20.990 17:36:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:20.990 17:36:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:20.990 17:36:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:20.990 17:36:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:20.990 "name": "raid_bdev1", 00:39:20.990 "uuid": "6680d805-cc21-43bc-a1df-3a99966ad433", 00:39:20.990 "strip_size_kb": 0, 00:39:20.990 "state": "online", 00:39:20.990 "raid_level": "raid1", 00:39:20.990 "superblock": true, 00:39:20.990 "num_base_bdevs": 4, 00:39:20.990 "num_base_bdevs_discovered": 3, 00:39:20.990 "num_base_bdevs_operational": 3, 00:39:20.990 "process": { 00:39:20.990 "type": "rebuild", 00:39:20.990 "target": "spare", 00:39:20.990 "progress": { 00:39:20.990 "blocks": 49152, 00:39:20.990 "percent": 77 00:39:20.990 } 00:39:20.990 }, 00:39:20.990 "base_bdevs_list": [ 00:39:20.990 { 00:39:20.990 "name": "spare", 00:39:20.990 "uuid": "32889ee8-15c6-5f9f-939b-3eb0001b16ea", 00:39:20.990 "is_configured": true, 00:39:20.990 "data_offset": 2048, 00:39:20.990 "data_size": 63488 00:39:20.990 }, 00:39:20.990 { 00:39:20.990 "name": null, 00:39:20.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:20.990 "is_configured": false, 00:39:20.990 "data_offset": 0, 00:39:20.990 "data_size": 63488 00:39:20.990 }, 00:39:20.990 { 00:39:20.990 "name": "BaseBdev3", 00:39:20.990 "uuid": "745afd0a-fcd0-5a01-bcc7-37a5f0d2cc8d", 00:39:20.990 "is_configured": true, 00:39:20.990 "data_offset": 2048, 00:39:20.990 "data_size": 63488 00:39:20.990 }, 00:39:20.990 { 00:39:20.990 "name": "BaseBdev4", 00:39:20.990 "uuid": "39771b43-da32-51c6-8dcc-8126515c8778", 00:39:20.990 "is_configured": true, 00:39:20.990 "data_offset": 
2048, 00:39:20.990 "data_size": 63488 00:39:20.990 } 00:39:20.990 ] 00:39:20.990 }' 00:39:20.990 17:36:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:20.990 17:36:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:20.990 17:36:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:20.990 17:36:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:39:20.990 17:36:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:39:21.249 [2024-11-26 17:36:21.924054] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:39:21.249 [2024-11-26 17:36:21.924165] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:39:21.249 [2024-11-26 17:36:21.924346] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:21.816 17:36:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:39:21.816 17:36:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:21.816 17:36:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:21.816 17:36:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:21.816 17:36:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:21.816 17:36:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:21.816 17:36:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:21.816 17:36:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:21.816 17:36:22 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:39:21.816 17:36:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:21.816 17:36:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:21.816 17:36:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:21.816 "name": "raid_bdev1", 00:39:21.816 "uuid": "6680d805-cc21-43bc-a1df-3a99966ad433", 00:39:21.816 "strip_size_kb": 0, 00:39:21.816 "state": "online", 00:39:21.816 "raid_level": "raid1", 00:39:21.816 "superblock": true, 00:39:21.817 "num_base_bdevs": 4, 00:39:21.817 "num_base_bdevs_discovered": 3, 00:39:21.817 "num_base_bdevs_operational": 3, 00:39:21.817 "base_bdevs_list": [ 00:39:21.817 { 00:39:21.817 "name": "spare", 00:39:21.817 "uuid": "32889ee8-15c6-5f9f-939b-3eb0001b16ea", 00:39:21.817 "is_configured": true, 00:39:21.817 "data_offset": 2048, 00:39:21.817 "data_size": 63488 00:39:21.817 }, 00:39:21.817 { 00:39:21.817 "name": null, 00:39:21.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:21.817 "is_configured": false, 00:39:21.817 "data_offset": 0, 00:39:21.817 "data_size": 63488 00:39:21.817 }, 00:39:21.817 { 00:39:21.817 "name": "BaseBdev3", 00:39:21.817 "uuid": "745afd0a-fcd0-5a01-bcc7-37a5f0d2cc8d", 00:39:21.817 "is_configured": true, 00:39:21.817 "data_offset": 2048, 00:39:21.817 "data_size": 63488 00:39:21.817 }, 00:39:21.817 { 00:39:21.817 "name": "BaseBdev4", 00:39:21.817 "uuid": "39771b43-da32-51c6-8dcc-8126515c8778", 00:39:21.817 "is_configured": true, 00:39:21.817 "data_offset": 2048, 00:39:21.817 "data_size": 63488 00:39:21.817 } 00:39:21.817 ] 00:39:21.817 }' 00:39:21.817 17:36:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:21.817 17:36:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:39:21.817 17:36:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:39:22.082 17:36:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:39:22.082 17:36:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:39:22.082 17:36:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:22.082 17:36:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:22.082 17:36:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:39:22.082 17:36:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:39:22.082 17:36:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:22.082 17:36:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:22.082 17:36:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:22.082 17:36:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:22.082 17:36:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:22.082 17:36:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:22.082 17:36:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:22.082 "name": "raid_bdev1", 00:39:22.082 "uuid": "6680d805-cc21-43bc-a1df-3a99966ad433", 00:39:22.082 "strip_size_kb": 0, 00:39:22.082 "state": "online", 00:39:22.082 "raid_level": "raid1", 00:39:22.082 "superblock": true, 00:39:22.082 "num_base_bdevs": 4, 00:39:22.082 "num_base_bdevs_discovered": 3, 00:39:22.082 "num_base_bdevs_operational": 3, 00:39:22.082 "base_bdevs_list": [ 00:39:22.082 { 00:39:22.082 "name": "spare", 00:39:22.082 "uuid": "32889ee8-15c6-5f9f-939b-3eb0001b16ea", 00:39:22.082 "is_configured": true, 00:39:22.082 "data_offset": 2048, 
00:39:22.082 "data_size": 63488 00:39:22.082 }, 00:39:22.082 { 00:39:22.082 "name": null, 00:39:22.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:22.082 "is_configured": false, 00:39:22.082 "data_offset": 0, 00:39:22.082 "data_size": 63488 00:39:22.082 }, 00:39:22.082 { 00:39:22.082 "name": "BaseBdev3", 00:39:22.082 "uuid": "745afd0a-fcd0-5a01-bcc7-37a5f0d2cc8d", 00:39:22.082 "is_configured": true, 00:39:22.082 "data_offset": 2048, 00:39:22.082 "data_size": 63488 00:39:22.082 }, 00:39:22.082 { 00:39:22.083 "name": "BaseBdev4", 00:39:22.083 "uuid": "39771b43-da32-51c6-8dcc-8126515c8778", 00:39:22.083 "is_configured": true, 00:39:22.083 "data_offset": 2048, 00:39:22.083 "data_size": 63488 00:39:22.083 } 00:39:22.083 ] 00:39:22.083 }' 00:39:22.083 17:36:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:22.083 17:36:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:39:22.083 17:36:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:22.083 17:36:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:39:22.083 17:36:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:39:22.083 17:36:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:22.083 17:36:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:22.083 17:36:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:22.083 17:36:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:22.083 17:36:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:22.083 17:36:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:22.083 
17:36:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:22.083 17:36:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:22.083 17:36:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:22.083 17:36:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:22.083 17:36:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:22.083 17:36:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:22.083 17:36:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:22.083 17:36:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:22.083 17:36:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:22.083 "name": "raid_bdev1", 00:39:22.083 "uuid": "6680d805-cc21-43bc-a1df-3a99966ad433", 00:39:22.083 "strip_size_kb": 0, 00:39:22.083 "state": "online", 00:39:22.083 "raid_level": "raid1", 00:39:22.083 "superblock": true, 00:39:22.083 "num_base_bdevs": 4, 00:39:22.083 "num_base_bdevs_discovered": 3, 00:39:22.083 "num_base_bdevs_operational": 3, 00:39:22.083 "base_bdevs_list": [ 00:39:22.083 { 00:39:22.083 "name": "spare", 00:39:22.083 "uuid": "32889ee8-15c6-5f9f-939b-3eb0001b16ea", 00:39:22.083 "is_configured": true, 00:39:22.083 "data_offset": 2048, 00:39:22.083 "data_size": 63488 00:39:22.083 }, 00:39:22.083 { 00:39:22.083 "name": null, 00:39:22.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:22.083 "is_configured": false, 00:39:22.083 "data_offset": 0, 00:39:22.083 "data_size": 63488 00:39:22.083 }, 00:39:22.083 { 00:39:22.083 "name": "BaseBdev3", 00:39:22.083 "uuid": "745afd0a-fcd0-5a01-bcc7-37a5f0d2cc8d", 00:39:22.083 "is_configured": true, 00:39:22.083 "data_offset": 2048, 00:39:22.083 "data_size": 63488 
00:39:22.083 }, 00:39:22.083 { 00:39:22.083 "name": "BaseBdev4", 00:39:22.083 "uuid": "39771b43-da32-51c6-8dcc-8126515c8778", 00:39:22.083 "is_configured": true, 00:39:22.083 "data_offset": 2048, 00:39:22.083 "data_size": 63488 00:39:22.083 } 00:39:22.083 ] 00:39:22.083 }' 00:39:22.083 17:36:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:22.083 17:36:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:22.663 17:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:39:22.663 17:36:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:22.663 17:36:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:22.663 [2024-11-26 17:36:23.120134] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:22.663 [2024-11-26 17:36:23.120213] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:22.663 [2024-11-26 17:36:23.120332] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:22.663 [2024-11-26 17:36:23.120454] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:22.663 [2024-11-26 17:36:23.120472] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:39:22.663 17:36:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:22.663 17:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:22.663 17:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:39:22.663 17:36:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:22.663 17:36:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:22.663 
17:36:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:22.663 17:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:39:22.663 17:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:39:22.663 17:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:39:22.663 17:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:39:22.663 17:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:39:22.663 17:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:39:22.663 17:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:39:22.663 17:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:39:22.663 17:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:39:22.663 17:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:39:22.663 17:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:39:22.663 17:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:39:22.663 17:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:39:22.922 /dev/nbd0 00:39:22.922 17:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:39:22.922 17:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:39:22.922 17:36:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:39:22.922 17:36:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 
00:39:22.922 17:36:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:22.922 17:36:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:22.922 17:36:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:39:22.922 17:36:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:39:22.922 17:36:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:22.922 17:36:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:22.922 17:36:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:22.922 1+0 records in 00:39:22.922 1+0 records out 00:39:22.922 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000416517 s, 9.8 MB/s 00:39:22.922 17:36:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:22.922 17:36:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:39:22.922 17:36:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:22.922 17:36:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:22.922 17:36:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:39:22.922 17:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:22.923 17:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:39:22.923 17:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:39:23.182 /dev/nbd1 00:39:23.182 17:36:23 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:39:23.182 17:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:39:23.182 17:36:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:39:23.182 17:36:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:39:23.182 17:36:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:23.182 17:36:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:23.182 17:36:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:39:23.182 17:36:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:39:23.182 17:36:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:23.182 17:36:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:23.182 17:36:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:23.182 1+0 records in 00:39:23.182 1+0 records out 00:39:23.182 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000469254 s, 8.7 MB/s 00:39:23.182 17:36:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:23.182 17:36:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:39:23.182 17:36:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:23.182 17:36:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:23.182 17:36:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:39:23.182 17:36:23 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:23.182 17:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:39:23.182 17:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:39:23.441 17:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:39:23.441 17:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:39:23.441 17:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:39:23.441 17:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:23.441 17:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:39:23.441 17:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:23.441 17:36:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:39:23.701 17:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:39:23.701 17:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:39:23.701 17:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:39:23.701 17:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:23.701 17:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:23.701 17:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:39:23.701 17:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:39:23.701 17:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:39:23.701 17:36:24 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:23.701 17:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:39:23.701 17:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:39:23.701 17:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:39:23.701 17:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:39:23.701 17:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:23.701 17:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:23.701 17:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:39:23.960 17:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:39:23.960 17:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:39:23.960 17:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:39:23.960 17:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:39:23.960 17:36:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:23.960 17:36:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:23.960 17:36:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:23.960 17:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:39:23.960 17:36:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:23.960 17:36:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:23.960 [2024-11-26 17:36:24.416180] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:39:23.960 [2024-11-26 17:36:24.416251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:23.960 [2024-11-26 17:36:24.416277] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:39:23.960 [2024-11-26 17:36:24.416288] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:23.960 [2024-11-26 17:36:24.418758] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:23.960 [2024-11-26 17:36:24.418797] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:39:23.960 [2024-11-26 17:36:24.418868] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:39:23.960 [2024-11-26 17:36:24.418926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:23.960 [2024-11-26 17:36:24.419129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:39:23.960 [2024-11-26 17:36:24.419232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:39:23.960 spare 00:39:23.960 17:36:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:23.960 17:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:39:23.960 17:36:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:23.960 17:36:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:23.960 [2024-11-26 17:36:24.519142] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:39:23.960 [2024-11-26 17:36:24.519190] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:39:23.960 [2024-11-26 17:36:24.519587] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:39:23.960 [2024-11-26 17:36:24.519828] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:39:23.960 [2024-11-26 17:36:24.519851] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:39:23.960 [2024-11-26 17:36:24.520112] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:23.960 17:36:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:23.960 17:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:39:23.961 17:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:23.961 17:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:23.961 17:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:23.961 17:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:23.961 17:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:23.961 17:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:23.961 17:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:23.961 17:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:23.961 17:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:23.961 17:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:23.961 17:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:23.961 17:36:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:23.961 17:36:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:39:23.961 17:36:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:23.961 17:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:23.961 "name": "raid_bdev1", 00:39:23.961 "uuid": "6680d805-cc21-43bc-a1df-3a99966ad433", 00:39:23.961 "strip_size_kb": 0, 00:39:23.961 "state": "online", 00:39:23.961 "raid_level": "raid1", 00:39:23.961 "superblock": true, 00:39:23.961 "num_base_bdevs": 4, 00:39:23.961 "num_base_bdevs_discovered": 3, 00:39:23.961 "num_base_bdevs_operational": 3, 00:39:23.961 "base_bdevs_list": [ 00:39:23.961 { 00:39:23.961 "name": "spare", 00:39:23.961 "uuid": "32889ee8-15c6-5f9f-939b-3eb0001b16ea", 00:39:23.961 "is_configured": true, 00:39:23.961 "data_offset": 2048, 00:39:23.961 "data_size": 63488 00:39:23.961 }, 00:39:23.961 { 00:39:23.961 "name": null, 00:39:23.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:23.961 "is_configured": false, 00:39:23.961 "data_offset": 2048, 00:39:23.961 "data_size": 63488 00:39:23.961 }, 00:39:23.961 { 00:39:23.961 "name": "BaseBdev3", 00:39:23.961 "uuid": "745afd0a-fcd0-5a01-bcc7-37a5f0d2cc8d", 00:39:23.961 "is_configured": true, 00:39:23.961 "data_offset": 2048, 00:39:23.961 "data_size": 63488 00:39:23.961 }, 00:39:23.961 { 00:39:23.961 "name": "BaseBdev4", 00:39:23.961 "uuid": "39771b43-da32-51c6-8dcc-8126515c8778", 00:39:23.961 "is_configured": true, 00:39:23.961 "data_offset": 2048, 00:39:23.961 "data_size": 63488 00:39:23.961 } 00:39:23.961 ] 00:39:23.961 }' 00:39:23.961 17:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:23.961 17:36:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:24.530 17:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:24.530 17:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:24.530 17:36:24 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:39:24.530 17:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:39:24.530 17:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:24.530 17:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:24.530 17:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:24.530 17:36:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:24.530 17:36:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:24.530 17:36:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:24.530 17:36:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:24.530 "name": "raid_bdev1", 00:39:24.530 "uuid": "6680d805-cc21-43bc-a1df-3a99966ad433", 00:39:24.530 "strip_size_kb": 0, 00:39:24.530 "state": "online", 00:39:24.530 "raid_level": "raid1", 00:39:24.530 "superblock": true, 00:39:24.530 "num_base_bdevs": 4, 00:39:24.530 "num_base_bdevs_discovered": 3, 00:39:24.530 "num_base_bdevs_operational": 3, 00:39:24.530 "base_bdevs_list": [ 00:39:24.530 { 00:39:24.530 "name": "spare", 00:39:24.530 "uuid": "32889ee8-15c6-5f9f-939b-3eb0001b16ea", 00:39:24.530 "is_configured": true, 00:39:24.530 "data_offset": 2048, 00:39:24.530 "data_size": 63488 00:39:24.530 }, 00:39:24.530 { 00:39:24.530 "name": null, 00:39:24.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:24.530 "is_configured": false, 00:39:24.530 "data_offset": 2048, 00:39:24.530 "data_size": 63488 00:39:24.530 }, 00:39:24.530 { 00:39:24.530 "name": "BaseBdev3", 00:39:24.530 "uuid": "745afd0a-fcd0-5a01-bcc7-37a5f0d2cc8d", 00:39:24.530 "is_configured": true, 00:39:24.530 "data_offset": 2048, 00:39:24.530 "data_size": 63488 00:39:24.530 
}, 00:39:24.530 { 00:39:24.530 "name": "BaseBdev4", 00:39:24.530 "uuid": "39771b43-da32-51c6-8dcc-8126515c8778", 00:39:24.530 "is_configured": true, 00:39:24.530 "data_offset": 2048, 00:39:24.530 "data_size": 63488 00:39:24.530 } 00:39:24.530 ] 00:39:24.530 }' 00:39:24.530 17:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:24.530 17:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:39:24.530 17:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:24.530 17:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:39:24.530 17:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:24.530 17:36:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:24.530 17:36:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:24.530 17:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:39:24.530 17:36:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:24.530 17:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:39:24.530 17:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:39:24.530 17:36:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:24.530 17:36:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:24.530 [2024-11-26 17:36:25.115180] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:24.531 17:36:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:24.531 17:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 2 00:39:24.531 17:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:24.531 17:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:24.531 17:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:24.531 17:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:24.531 17:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:39:24.531 17:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:24.531 17:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:24.531 17:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:24.531 17:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:24.531 17:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:24.531 17:36:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:24.531 17:36:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:24.531 17:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:24.531 17:36:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:24.531 17:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:24.531 "name": "raid_bdev1", 00:39:24.531 "uuid": "6680d805-cc21-43bc-a1df-3a99966ad433", 00:39:24.531 "strip_size_kb": 0, 00:39:24.531 "state": "online", 00:39:24.531 "raid_level": "raid1", 00:39:24.531 "superblock": true, 00:39:24.531 "num_base_bdevs": 4, 00:39:24.531 "num_base_bdevs_discovered": 2, 00:39:24.531 "num_base_bdevs_operational": 
2, 00:39:24.531 "base_bdevs_list": [ 00:39:24.531 { 00:39:24.531 "name": null, 00:39:24.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:24.531 "is_configured": false, 00:39:24.531 "data_offset": 0, 00:39:24.531 "data_size": 63488 00:39:24.531 }, 00:39:24.531 { 00:39:24.531 "name": null, 00:39:24.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:24.531 "is_configured": false, 00:39:24.531 "data_offset": 2048, 00:39:24.531 "data_size": 63488 00:39:24.531 }, 00:39:24.531 { 00:39:24.531 "name": "BaseBdev3", 00:39:24.531 "uuid": "745afd0a-fcd0-5a01-bcc7-37a5f0d2cc8d", 00:39:24.531 "is_configured": true, 00:39:24.531 "data_offset": 2048, 00:39:24.531 "data_size": 63488 00:39:24.531 }, 00:39:24.531 { 00:39:24.531 "name": "BaseBdev4", 00:39:24.531 "uuid": "39771b43-da32-51c6-8dcc-8126515c8778", 00:39:24.531 "is_configured": true, 00:39:24.531 "data_offset": 2048, 00:39:24.531 "data_size": 63488 00:39:24.531 } 00:39:24.531 ] 00:39:24.531 }' 00:39:24.531 17:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:24.531 17:36:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:25.100 17:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:39:25.100 17:36:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:25.100 17:36:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:25.100 [2024-11-26 17:36:25.586391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:25.100 [2024-11-26 17:36:25.586626] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:39:25.100 [2024-11-26 17:36:25.586654] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:39:25.100 [2024-11-26 17:36:25.586690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:25.100 [2024-11-26 17:36:25.602665] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:39:25.100 17:36:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:25.100 17:36:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:39:25.100 [2024-11-26 17:36:25.604748] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:39:26.038 17:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:26.038 17:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:26.038 17:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:26.038 17:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:26.038 17:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:26.038 17:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:26.038 17:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:26.038 17:36:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.038 17:36:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:26.038 17:36:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.038 17:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:26.038 "name": "raid_bdev1", 00:39:26.038 "uuid": "6680d805-cc21-43bc-a1df-3a99966ad433", 00:39:26.038 "strip_size_kb": 0, 00:39:26.038 "state": "online", 00:39:26.038 "raid_level": "raid1", 
00:39:26.038 "superblock": true, 00:39:26.038 "num_base_bdevs": 4, 00:39:26.038 "num_base_bdevs_discovered": 3, 00:39:26.038 "num_base_bdevs_operational": 3, 00:39:26.038 "process": { 00:39:26.038 "type": "rebuild", 00:39:26.038 "target": "spare", 00:39:26.038 "progress": { 00:39:26.038 "blocks": 20480, 00:39:26.038 "percent": 32 00:39:26.038 } 00:39:26.038 }, 00:39:26.038 "base_bdevs_list": [ 00:39:26.038 { 00:39:26.038 "name": "spare", 00:39:26.038 "uuid": "32889ee8-15c6-5f9f-939b-3eb0001b16ea", 00:39:26.038 "is_configured": true, 00:39:26.038 "data_offset": 2048, 00:39:26.038 "data_size": 63488 00:39:26.038 }, 00:39:26.038 { 00:39:26.038 "name": null, 00:39:26.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:26.038 "is_configured": false, 00:39:26.038 "data_offset": 2048, 00:39:26.038 "data_size": 63488 00:39:26.038 }, 00:39:26.038 { 00:39:26.038 "name": "BaseBdev3", 00:39:26.038 "uuid": "745afd0a-fcd0-5a01-bcc7-37a5f0d2cc8d", 00:39:26.038 "is_configured": true, 00:39:26.038 "data_offset": 2048, 00:39:26.038 "data_size": 63488 00:39:26.038 }, 00:39:26.038 { 00:39:26.038 "name": "BaseBdev4", 00:39:26.038 "uuid": "39771b43-da32-51c6-8dcc-8126515c8778", 00:39:26.038 "is_configured": true, 00:39:26.038 "data_offset": 2048, 00:39:26.038 "data_size": 63488 00:39:26.038 } 00:39:26.038 ] 00:39:26.038 }' 00:39:26.038 17:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:26.038 17:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:26.038 17:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:26.298 17:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:39:26.298 17:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:39:26.298 17:36:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:39:26.298 17:36:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:26.298 [2024-11-26 17:36:26.772031] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:26.298 [2024-11-26 17:36:26.810239] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:39:26.298 [2024-11-26 17:36:26.810305] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:26.298 [2024-11-26 17:36:26.810324] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:26.298 [2024-11-26 17:36:26.810331] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:39:26.298 17:36:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.298 17:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:39:26.298 17:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:26.298 17:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:26.298 17:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:26.299 17:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:26.299 17:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:39:26.299 17:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:26.299 17:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:26.299 17:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:26.299 17:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:26.299 17:36:26 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:26.299 17:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:26.299 17:36:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.299 17:36:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:26.299 17:36:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.299 17:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:26.299 "name": "raid_bdev1", 00:39:26.299 "uuid": "6680d805-cc21-43bc-a1df-3a99966ad433", 00:39:26.299 "strip_size_kb": 0, 00:39:26.299 "state": "online", 00:39:26.299 "raid_level": "raid1", 00:39:26.299 "superblock": true, 00:39:26.299 "num_base_bdevs": 4, 00:39:26.299 "num_base_bdevs_discovered": 2, 00:39:26.299 "num_base_bdevs_operational": 2, 00:39:26.299 "base_bdevs_list": [ 00:39:26.299 { 00:39:26.299 "name": null, 00:39:26.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:26.299 "is_configured": false, 00:39:26.299 "data_offset": 0, 00:39:26.299 "data_size": 63488 00:39:26.299 }, 00:39:26.299 { 00:39:26.299 "name": null, 00:39:26.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:26.299 "is_configured": false, 00:39:26.299 "data_offset": 2048, 00:39:26.299 "data_size": 63488 00:39:26.299 }, 00:39:26.299 { 00:39:26.299 "name": "BaseBdev3", 00:39:26.299 "uuid": "745afd0a-fcd0-5a01-bcc7-37a5f0d2cc8d", 00:39:26.299 "is_configured": true, 00:39:26.299 "data_offset": 2048, 00:39:26.299 "data_size": 63488 00:39:26.299 }, 00:39:26.299 { 00:39:26.299 "name": "BaseBdev4", 00:39:26.299 "uuid": "39771b43-da32-51c6-8dcc-8126515c8778", 00:39:26.299 "is_configured": true, 00:39:26.299 "data_offset": 2048, 00:39:26.299 "data_size": 63488 00:39:26.299 } 00:39:26.299 ] 00:39:26.299 }' 00:39:26.299 17:36:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:39:26.299 17:36:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:26.869 17:36:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:39:26.869 17:36:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.869 17:36:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:26.869 [2024-11-26 17:36:27.308511] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:39:26.869 [2024-11-26 17:36:27.308602] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:26.869 [2024-11-26 17:36:27.308642] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:39:26.869 [2024-11-26 17:36:27.308656] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:26.869 [2024-11-26 17:36:27.309177] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:26.869 [2024-11-26 17:36:27.309206] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:39:26.869 [2024-11-26 17:36:27.309314] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:39:26.869 [2024-11-26 17:36:27.309335] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:39:26.869 [2024-11-26 17:36:27.309350] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:39:26.869 [2024-11-26 17:36:27.309372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:26.869 [2024-11-26 17:36:27.326074] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:39:26.869 spare 00:39:26.869 17:36:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.869 17:36:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:39:26.869 [2024-11-26 17:36:27.328110] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:39:27.809 17:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:27.809 17:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:27.809 17:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:27.809 17:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:27.809 17:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:27.809 17:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:27.809 17:36:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:27.809 17:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:27.809 17:36:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:27.809 17:36:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:27.809 17:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:27.809 "name": "raid_bdev1", 00:39:27.809 "uuid": "6680d805-cc21-43bc-a1df-3a99966ad433", 00:39:27.809 "strip_size_kb": 0, 00:39:27.809 "state": "online", 00:39:27.809 
"raid_level": "raid1", 00:39:27.809 "superblock": true, 00:39:27.809 "num_base_bdevs": 4, 00:39:27.809 "num_base_bdevs_discovered": 3, 00:39:27.809 "num_base_bdevs_operational": 3, 00:39:27.809 "process": { 00:39:27.809 "type": "rebuild", 00:39:27.809 "target": "spare", 00:39:27.809 "progress": { 00:39:27.809 "blocks": 20480, 00:39:27.809 "percent": 32 00:39:27.809 } 00:39:27.809 }, 00:39:27.809 "base_bdevs_list": [ 00:39:27.809 { 00:39:27.809 "name": "spare", 00:39:27.809 "uuid": "32889ee8-15c6-5f9f-939b-3eb0001b16ea", 00:39:27.809 "is_configured": true, 00:39:27.809 "data_offset": 2048, 00:39:27.809 "data_size": 63488 00:39:27.809 }, 00:39:27.809 { 00:39:27.809 "name": null, 00:39:27.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:27.809 "is_configured": false, 00:39:27.809 "data_offset": 2048, 00:39:27.809 "data_size": 63488 00:39:27.809 }, 00:39:27.809 { 00:39:27.809 "name": "BaseBdev3", 00:39:27.809 "uuid": "745afd0a-fcd0-5a01-bcc7-37a5f0d2cc8d", 00:39:27.809 "is_configured": true, 00:39:27.809 "data_offset": 2048, 00:39:27.809 "data_size": 63488 00:39:27.809 }, 00:39:27.809 { 00:39:27.809 "name": "BaseBdev4", 00:39:27.809 "uuid": "39771b43-da32-51c6-8dcc-8126515c8778", 00:39:27.809 "is_configured": true, 00:39:27.809 "data_offset": 2048, 00:39:27.809 "data_size": 63488 00:39:27.809 } 00:39:27.809 ] 00:39:27.809 }' 00:39:27.809 17:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:27.809 17:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:27.809 17:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:27.809 17:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:39:27.809 17:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:39:27.809 17:36:28 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:39:27.809 17:36:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:27.809 [2024-11-26 17:36:28.471701] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:28.069 [2024-11-26 17:36:28.534038] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:39:28.069 [2024-11-26 17:36:28.534124] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:28.069 [2024-11-26 17:36:28.534141] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:28.069 [2024-11-26 17:36:28.534150] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:39:28.069 17:36:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.069 17:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:39:28.069 17:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:28.069 17:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:28.069 17:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:28.069 17:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:28.069 17:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:39:28.069 17:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:28.069 17:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:28.069 17:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:28.069 17:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:28.069 
17:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:28.069 17:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:28.069 17:36:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.069 17:36:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:28.069 17:36:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.069 17:36:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:28.069 "name": "raid_bdev1", 00:39:28.069 "uuid": "6680d805-cc21-43bc-a1df-3a99966ad433", 00:39:28.069 "strip_size_kb": 0, 00:39:28.069 "state": "online", 00:39:28.069 "raid_level": "raid1", 00:39:28.069 "superblock": true, 00:39:28.069 "num_base_bdevs": 4, 00:39:28.069 "num_base_bdevs_discovered": 2, 00:39:28.069 "num_base_bdevs_operational": 2, 00:39:28.069 "base_bdevs_list": [ 00:39:28.069 { 00:39:28.069 "name": null, 00:39:28.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:28.069 "is_configured": false, 00:39:28.069 "data_offset": 0, 00:39:28.069 "data_size": 63488 00:39:28.069 }, 00:39:28.069 { 00:39:28.069 "name": null, 00:39:28.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:28.069 "is_configured": false, 00:39:28.069 "data_offset": 2048, 00:39:28.069 "data_size": 63488 00:39:28.069 }, 00:39:28.069 { 00:39:28.069 "name": "BaseBdev3", 00:39:28.069 "uuid": "745afd0a-fcd0-5a01-bcc7-37a5f0d2cc8d", 00:39:28.069 "is_configured": true, 00:39:28.069 "data_offset": 2048, 00:39:28.069 "data_size": 63488 00:39:28.069 }, 00:39:28.069 { 00:39:28.069 "name": "BaseBdev4", 00:39:28.069 "uuid": "39771b43-da32-51c6-8dcc-8126515c8778", 00:39:28.069 "is_configured": true, 00:39:28.069 "data_offset": 2048, 00:39:28.069 "data_size": 63488 00:39:28.069 } 00:39:28.069 ] 00:39:28.069 }' 00:39:28.069 17:36:28 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:28.069 17:36:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:28.639 17:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:28.639 17:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:28.639 17:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:39:28.639 17:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:39:28.639 17:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:28.639 17:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:28.639 17:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:28.639 17:36:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.639 17:36:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:28.639 17:36:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.639 17:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:28.639 "name": "raid_bdev1", 00:39:28.639 "uuid": "6680d805-cc21-43bc-a1df-3a99966ad433", 00:39:28.639 "strip_size_kb": 0, 00:39:28.639 "state": "online", 00:39:28.639 "raid_level": "raid1", 00:39:28.639 "superblock": true, 00:39:28.639 "num_base_bdevs": 4, 00:39:28.639 "num_base_bdevs_discovered": 2, 00:39:28.639 "num_base_bdevs_operational": 2, 00:39:28.639 "base_bdevs_list": [ 00:39:28.639 { 00:39:28.639 "name": null, 00:39:28.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:28.639 "is_configured": false, 00:39:28.639 "data_offset": 0, 00:39:28.639 "data_size": 63488 00:39:28.639 }, 00:39:28.639 
{ 00:39:28.639 "name": null, 00:39:28.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:28.639 "is_configured": false, 00:39:28.639 "data_offset": 2048, 00:39:28.639 "data_size": 63488 00:39:28.639 }, 00:39:28.639 { 00:39:28.639 "name": "BaseBdev3", 00:39:28.639 "uuid": "745afd0a-fcd0-5a01-bcc7-37a5f0d2cc8d", 00:39:28.639 "is_configured": true, 00:39:28.639 "data_offset": 2048, 00:39:28.639 "data_size": 63488 00:39:28.639 }, 00:39:28.639 { 00:39:28.639 "name": "BaseBdev4", 00:39:28.639 "uuid": "39771b43-da32-51c6-8dcc-8126515c8778", 00:39:28.639 "is_configured": true, 00:39:28.639 "data_offset": 2048, 00:39:28.639 "data_size": 63488 00:39:28.639 } 00:39:28.639 ] 00:39:28.639 }' 00:39:28.639 17:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:28.639 17:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:39:28.639 17:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:28.639 17:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:39:28.639 17:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:39:28.639 17:36:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.639 17:36:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:28.639 17:36:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.639 17:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:39:28.639 17:36:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.639 17:36:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:28.639 [2024-11-26 17:36:29.191864] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:39:28.639 [2024-11-26 17:36:29.191943] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:28.639 [2024-11-26 17:36:29.191968] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:39:28.639 [2024-11-26 17:36:29.191980] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:28.639 [2024-11-26 17:36:29.192480] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:28.639 [2024-11-26 17:36:29.192529] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:39:28.639 [2024-11-26 17:36:29.192624] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:39:28.639 [2024-11-26 17:36:29.192647] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:39:28.639 [2024-11-26 17:36:29.192659] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:39:28.639 [2024-11-26 17:36:29.192686] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:39:28.639 BaseBdev1 00:39:28.639 17:36:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.639 17:36:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:39:29.578 17:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:39:29.578 17:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:29.578 17:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:29.578 17:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:29.578 17:36:30 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:29.578 17:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:39:29.578 17:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:29.578 17:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:29.578 17:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:29.578 17:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:29.578 17:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:29.578 17:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:29.578 17:36:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:29.578 17:36:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:29.578 17:36:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:29.578 17:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:29.578 "name": "raid_bdev1", 00:39:29.578 "uuid": "6680d805-cc21-43bc-a1df-3a99966ad433", 00:39:29.578 "strip_size_kb": 0, 00:39:29.578 "state": "online", 00:39:29.578 "raid_level": "raid1", 00:39:29.578 "superblock": true, 00:39:29.578 "num_base_bdevs": 4, 00:39:29.578 "num_base_bdevs_discovered": 2, 00:39:29.578 "num_base_bdevs_operational": 2, 00:39:29.578 "base_bdevs_list": [ 00:39:29.578 { 00:39:29.578 "name": null, 00:39:29.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:29.578 "is_configured": false, 00:39:29.578 "data_offset": 0, 00:39:29.578 "data_size": 63488 00:39:29.578 }, 00:39:29.578 { 00:39:29.578 "name": null, 00:39:29.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:29.578 
"is_configured": false, 00:39:29.578 "data_offset": 2048, 00:39:29.578 "data_size": 63488 00:39:29.578 }, 00:39:29.578 { 00:39:29.578 "name": "BaseBdev3", 00:39:29.578 "uuid": "745afd0a-fcd0-5a01-bcc7-37a5f0d2cc8d", 00:39:29.578 "is_configured": true, 00:39:29.578 "data_offset": 2048, 00:39:29.578 "data_size": 63488 00:39:29.578 }, 00:39:29.578 { 00:39:29.578 "name": "BaseBdev4", 00:39:29.578 "uuid": "39771b43-da32-51c6-8dcc-8126515c8778", 00:39:29.578 "is_configured": true, 00:39:29.578 "data_offset": 2048, 00:39:29.578 "data_size": 63488 00:39:29.578 } 00:39:29.578 ] 00:39:29.578 }' 00:39:29.578 17:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:29.578 17:36:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:30.148 17:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:30.148 17:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:30.148 17:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:39:30.148 17:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:39:30.148 17:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:30.148 17:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:30.148 17:36:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:30.148 17:36:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:30.148 17:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:30.148 17:36:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:30.148 17:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:39:30.148 "name": "raid_bdev1", 00:39:30.148 "uuid": "6680d805-cc21-43bc-a1df-3a99966ad433", 00:39:30.148 "strip_size_kb": 0, 00:39:30.148 "state": "online", 00:39:30.148 "raid_level": "raid1", 00:39:30.148 "superblock": true, 00:39:30.148 "num_base_bdevs": 4, 00:39:30.148 "num_base_bdevs_discovered": 2, 00:39:30.148 "num_base_bdevs_operational": 2, 00:39:30.148 "base_bdevs_list": [ 00:39:30.148 { 00:39:30.148 "name": null, 00:39:30.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:30.148 "is_configured": false, 00:39:30.148 "data_offset": 0, 00:39:30.148 "data_size": 63488 00:39:30.148 }, 00:39:30.148 { 00:39:30.148 "name": null, 00:39:30.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:30.148 "is_configured": false, 00:39:30.148 "data_offset": 2048, 00:39:30.148 "data_size": 63488 00:39:30.148 }, 00:39:30.148 { 00:39:30.148 "name": "BaseBdev3", 00:39:30.148 "uuid": "745afd0a-fcd0-5a01-bcc7-37a5f0d2cc8d", 00:39:30.148 "is_configured": true, 00:39:30.148 "data_offset": 2048, 00:39:30.148 "data_size": 63488 00:39:30.148 }, 00:39:30.148 { 00:39:30.148 "name": "BaseBdev4", 00:39:30.148 "uuid": "39771b43-da32-51c6-8dcc-8126515c8778", 00:39:30.148 "is_configured": true, 00:39:30.148 "data_offset": 2048, 00:39:30.148 "data_size": 63488 00:39:30.148 } 00:39:30.148 ] 00:39:30.148 }' 00:39:30.148 17:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:30.148 17:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:39:30.148 17:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:30.148 17:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:39:30.148 17:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:39:30.148 17:36:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:39:30.148 17:36:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:39:30.148 17:36:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:39:30.148 17:36:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:30.148 17:36:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:39:30.148 17:36:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:30.148 17:36:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:39:30.148 17:36:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:30.148 17:36:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:30.148 [2024-11-26 17:36:30.805154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:30.148 [2024-11-26 17:36:30.805372] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:39:30.148 [2024-11-26 17:36:30.805395] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:39:30.148 request: 00:39:30.148 { 00:39:30.148 "base_bdev": "BaseBdev1", 00:39:30.148 "raid_bdev": "raid_bdev1", 00:39:30.148 "method": "bdev_raid_add_base_bdev", 00:39:30.148 "req_id": 1 00:39:30.148 } 00:39:30.148 Got JSON-RPC error response 00:39:30.148 response: 00:39:30.148 { 00:39:30.148 "code": -22, 00:39:30.148 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:39:30.148 } 00:39:30.148 17:36:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:39:30.148 17:36:30 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:39:30.148 17:36:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:30.148 17:36:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:30.148 17:36:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:30.148 17:36:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:39:31.524 17:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:39:31.524 17:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:31.524 17:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:31.524 17:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:31.524 17:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:31.524 17:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:39:31.524 17:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:31.524 17:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:31.524 17:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:31.524 17:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:31.524 17:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:31.524 17:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:31.524 17:36:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:31.524 17:36:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:39:31.524 17:36:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:31.524 17:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:31.524 "name": "raid_bdev1", 00:39:31.524 "uuid": "6680d805-cc21-43bc-a1df-3a99966ad433", 00:39:31.524 "strip_size_kb": 0, 00:39:31.524 "state": "online", 00:39:31.524 "raid_level": "raid1", 00:39:31.524 "superblock": true, 00:39:31.524 "num_base_bdevs": 4, 00:39:31.524 "num_base_bdevs_discovered": 2, 00:39:31.524 "num_base_bdevs_operational": 2, 00:39:31.524 "base_bdevs_list": [ 00:39:31.524 { 00:39:31.524 "name": null, 00:39:31.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:31.524 "is_configured": false, 00:39:31.524 "data_offset": 0, 00:39:31.524 "data_size": 63488 00:39:31.524 }, 00:39:31.524 { 00:39:31.524 "name": null, 00:39:31.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:31.524 "is_configured": false, 00:39:31.524 "data_offset": 2048, 00:39:31.524 "data_size": 63488 00:39:31.524 }, 00:39:31.524 { 00:39:31.524 "name": "BaseBdev3", 00:39:31.524 "uuid": "745afd0a-fcd0-5a01-bcc7-37a5f0d2cc8d", 00:39:31.524 "is_configured": true, 00:39:31.524 "data_offset": 2048, 00:39:31.524 "data_size": 63488 00:39:31.524 }, 00:39:31.524 { 00:39:31.524 "name": "BaseBdev4", 00:39:31.524 "uuid": "39771b43-da32-51c6-8dcc-8126515c8778", 00:39:31.524 "is_configured": true, 00:39:31.524 "data_offset": 2048, 00:39:31.524 "data_size": 63488 00:39:31.524 } 00:39:31.524 ] 00:39:31.524 }' 00:39:31.524 17:36:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:31.524 17:36:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:31.782 17:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:31.782 17:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:31.782 17:36:32 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:39:31.782 17:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:39:31.782 17:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:31.782 17:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:31.782 17:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:31.782 17:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:31.782 17:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:31.782 17:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:31.782 17:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:31.782 "name": "raid_bdev1", 00:39:31.782 "uuid": "6680d805-cc21-43bc-a1df-3a99966ad433", 00:39:31.782 "strip_size_kb": 0, 00:39:31.782 "state": "online", 00:39:31.782 "raid_level": "raid1", 00:39:31.782 "superblock": true, 00:39:31.782 "num_base_bdevs": 4, 00:39:31.782 "num_base_bdevs_discovered": 2, 00:39:31.782 "num_base_bdevs_operational": 2, 00:39:31.782 "base_bdevs_list": [ 00:39:31.782 { 00:39:31.782 "name": null, 00:39:31.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:31.782 "is_configured": false, 00:39:31.782 "data_offset": 0, 00:39:31.782 "data_size": 63488 00:39:31.782 }, 00:39:31.782 { 00:39:31.782 "name": null, 00:39:31.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:31.782 "is_configured": false, 00:39:31.782 "data_offset": 2048, 00:39:31.782 "data_size": 63488 00:39:31.782 }, 00:39:31.782 { 00:39:31.782 "name": "BaseBdev3", 00:39:31.782 "uuid": "745afd0a-fcd0-5a01-bcc7-37a5f0d2cc8d", 00:39:31.782 "is_configured": true, 00:39:31.782 "data_offset": 2048, 00:39:31.782 "data_size": 63488 00:39:31.782 }, 
00:39:31.782 { 00:39:31.782 "name": "BaseBdev4", 00:39:31.782 "uuid": "39771b43-da32-51c6-8dcc-8126515c8778", 00:39:31.782 "is_configured": true, 00:39:31.782 "data_offset": 2048, 00:39:31.782 "data_size": 63488 00:39:31.782 } 00:39:31.782 ] 00:39:31.782 }' 00:39:31.782 17:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:31.782 17:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:39:31.782 17:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:31.782 17:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:39:31.782 17:36:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78292 00:39:31.782 17:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78292 ']' 00:39:31.782 17:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 78292 00:39:31.782 17:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:39:31.782 17:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:31.782 17:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78292 00:39:31.782 17:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:31.782 17:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:31.782 killing process with pid 78292 00:39:31.782 17:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78292' 00:39:31.782 17:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 78292 00:39:31.782 Received shutdown signal, test time was about 60.000000 seconds 00:39:31.782 00:39:31.782 Latency(us) 00:39:31.782 
[2024-11-26T17:36:32.477Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:31.782 [2024-11-26T17:36:32.477Z] =================================================================================================================== 00:39:31.782 [2024-11-26T17:36:32.477Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:39:31.782 [2024-11-26 17:36:32.429207] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:39:31.782 [2024-11-26 17:36:32.429343] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:31.782 17:36:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 78292 00:39:31.782 [2024-11-26 17:36:32.429439] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:31.782 [2024-11-26 17:36:32.429452] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:39:32.348 [2024-11-26 17:36:32.925224] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:39:33.723 17:36:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:39:33.723 00:39:33.723 real 0m25.440s 00:39:33.723 user 0m30.814s 00:39:33.723 sys 0m3.734s 00:39:33.723 17:36:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:33.723 17:36:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:39:33.723 ************************************ 00:39:33.723 END TEST raid_rebuild_test_sb 00:39:33.723 ************************************ 00:39:33.723 17:36:34 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:39:33.723 17:36:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:39:33.724 17:36:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:33.724 17:36:34 bdev_raid -- common/autotest_common.sh@10 -- # 
set +x 00:39:33.724 ************************************ 00:39:33.724 START TEST raid_rebuild_test_io 00:39:33.724 ************************************ 00:39:33.724 17:36:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:39:33.724 17:36:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:39:33.724 17:36:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:39:33.724 17:36:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:39:33.724 17:36:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:39:33.724 17:36:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:39:33.724 17:36:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:39:33.724 17:36:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:39:33.724 17:36:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:39:33.724 17:36:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:39:33.724 17:36:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:39:33.724 17:36:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:39:33.724 17:36:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:39:33.724 17:36:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:39:33.724 17:36:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:39:33.724 17:36:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:39:33.724 17:36:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:39:33.724 17:36:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev4 00:39:33.724 17:36:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:39:33.724 17:36:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:39:33.724 17:36:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:39:33.724 17:36:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:39:33.724 17:36:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:39:33.724 17:36:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:39:33.724 17:36:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:39:33.724 17:36:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:39:33.724 17:36:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:39:33.724 17:36:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:39:33.724 17:36:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:39:33.724 17:36:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:39:33.724 17:36:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79051 00:39:33.724 17:36:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:39:33.724 17:36:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79051 00:39:33.724 17:36:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 79051 ']' 00:39:33.724 17:36:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:33.724 17:36:34 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:39:33.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:33.724 17:36:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:33.724 17:36:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:33.724 17:36:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:39:33.724 I/O size of 3145728 is greater than zero copy threshold (65536). 00:39:33.724 Zero copy mechanism will not be used. 00:39:33.724 [2024-11-26 17:36:34.231717] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:39:33.724 [2024-11-26 17:36:34.231849] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79051 ] 00:39:33.724 [2024-11-26 17:36:34.407113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:33.983 [2024-11-26 17:36:34.531528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:34.241 [2024-11-26 17:36:34.749430] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:34.241 [2024-11-26 17:36:34.749519] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:34.499 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:34.499 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:39:34.499 17:36:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:39:34.499 17:36:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:39:34.499 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:34.499 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:39:34.499 BaseBdev1_malloc 00:39:34.499 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:34.499 17:36:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:39:34.499 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:34.499 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:39:34.499 [2024-11-26 17:36:35.129474] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:39:34.499 [2024-11-26 17:36:35.129552] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:34.499 [2024-11-26 17:36:35.129575] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:39:34.499 [2024-11-26 17:36:35.129588] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:34.499 [2024-11-26 17:36:35.131625] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:34.499 [2024-11-26 17:36:35.131667] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:39:34.499 BaseBdev1 00:39:34.499 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:34.499 17:36:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:39:34.499 17:36:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:39:34.499 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:34.499 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:39:34.499 BaseBdev2_malloc 00:39:34.499 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:34.499 17:36:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:39:34.499 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:34.499 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:39:34.499 [2024-11-26 17:36:35.185895] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:39:34.499 [2024-11-26 17:36:35.185956] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:34.499 [2024-11-26 17:36:35.185979] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:39:34.499 [2024-11-26 17:36:35.185990] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:34.499 [2024-11-26 17:36:35.188069] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:34.499 [2024-11-26 17:36:35.188107] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:39:34.499 BaseBdev2 00:39:34.500 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:34.500 17:36:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:39:34.500 17:36:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:39:34.500 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:34.500 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:39:34.759 BaseBdev3_malloc 00:39:34.759 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:34.759 17:36:35 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:39:34.759 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:34.759 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:39:34.759 [2024-11-26 17:36:35.257481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:39:34.759 [2024-11-26 17:36:35.257566] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:34.759 [2024-11-26 17:36:35.257599] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:39:34.759 [2024-11-26 17:36:35.257611] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:34.759 [2024-11-26 17:36:35.259726] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:34.759 [2024-11-26 17:36:35.259764] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:39:34.759 BaseBdev3 00:39:34.759 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:34.759 17:36:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:39:34.759 17:36:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:39:34.759 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:34.759 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:39:34.759 BaseBdev4_malloc 00:39:34.759 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:34.759 17:36:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:39:34.759 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:39:34.759 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:39:34.759 [2024-11-26 17:36:35.313878] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:39:34.759 [2024-11-26 17:36:35.313940] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:34.759 [2024-11-26 17:36:35.313959] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:39:34.759 [2024-11-26 17:36:35.313970] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:34.759 [2024-11-26 17:36:35.315979] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:34.759 [2024-11-26 17:36:35.316017] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:39:34.759 BaseBdev4 00:39:34.759 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:34.759 17:36:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:39:34.759 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:34.759 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:39:34.759 spare_malloc 00:39:34.759 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:34.759 17:36:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:39:34.759 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:34.759 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:39:34.759 spare_delay 00:39:34.759 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:34.759 17:36:35 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:39:34.759 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:34.759 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:39:34.759 [2024-11-26 17:36:35.383181] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:39:34.759 [2024-11-26 17:36:35.383241] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:34.759 [2024-11-26 17:36:35.383260] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:39:34.759 [2024-11-26 17:36:35.383272] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:34.759 [2024-11-26 17:36:35.385434] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:34.759 [2024-11-26 17:36:35.385477] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:39:34.759 spare 00:39:34.759 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:34.759 17:36:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:39:34.759 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:34.759 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:39:34.759 [2024-11-26 17:36:35.395202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:34.759 [2024-11-26 17:36:35.397299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:39:34.759 [2024-11-26 17:36:35.397395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:39:34.759 [2024-11-26 17:36:35.397454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:39:34.759 [2024-11-26 17:36:35.397562] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:39:34.759 [2024-11-26 17:36:35.397583] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:39:34.759 [2024-11-26 17:36:35.397875] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:39:34.759 [2024-11-26 17:36:35.398063] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:39:34.759 [2024-11-26 17:36:35.398084] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:39:34.759 [2024-11-26 17:36:35.398244] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:34.759 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:34.759 17:36:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:39:34.759 17:36:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:34.759 17:36:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:34.759 17:36:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:34.759 17:36:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:34.759 17:36:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:34.759 17:36:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:34.759 17:36:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:34.759 17:36:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:34.759 17:36:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:39:34.759 17:36:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:34.759 17:36:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:34.759 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:34.759 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:39:34.759 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:35.018 17:36:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:35.018 "name": "raid_bdev1", 00:39:35.018 "uuid": "a537fd6d-3e9b-4ded-a188-b6da077910e5", 00:39:35.018 "strip_size_kb": 0, 00:39:35.018 "state": "online", 00:39:35.018 "raid_level": "raid1", 00:39:35.018 "superblock": false, 00:39:35.018 "num_base_bdevs": 4, 00:39:35.018 "num_base_bdevs_discovered": 4, 00:39:35.018 "num_base_bdevs_operational": 4, 00:39:35.018 "base_bdevs_list": [ 00:39:35.018 { 00:39:35.018 "name": "BaseBdev1", 00:39:35.018 "uuid": "ad850b91-5a6e-51e3-99c7-3727a420120f", 00:39:35.018 "is_configured": true, 00:39:35.018 "data_offset": 0, 00:39:35.018 "data_size": 65536 00:39:35.018 }, 00:39:35.018 { 00:39:35.018 "name": "BaseBdev2", 00:39:35.018 "uuid": "d842e16a-b0e9-55f9-95a2-3d948f01b716", 00:39:35.018 "is_configured": true, 00:39:35.018 "data_offset": 0, 00:39:35.018 "data_size": 65536 00:39:35.018 }, 00:39:35.018 { 00:39:35.018 "name": "BaseBdev3", 00:39:35.018 "uuid": "a4d085ca-bef2-5519-b1ce-aafbd57618fc", 00:39:35.018 "is_configured": true, 00:39:35.018 "data_offset": 0, 00:39:35.018 "data_size": 65536 00:39:35.018 }, 00:39:35.018 { 00:39:35.018 "name": "BaseBdev4", 00:39:35.018 "uuid": "69e806c6-98e8-5364-8a6f-48388cb5109f", 00:39:35.018 "is_configured": true, 00:39:35.018 "data_offset": 0, 00:39:35.018 "data_size": 65536 00:39:35.018 } 00:39:35.018 ] 00:39:35.018 }' 00:39:35.018 
17:36:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:35.018 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:39:35.277 17:36:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:39:35.277 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:35.277 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:39:35.277 17:36:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:39:35.277 [2024-11-26 17:36:35.866838] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:35.277 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:35.277 17:36:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:39:35.277 17:36:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:35.277 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:35.277 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:39:35.277 17:36:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:39:35.277 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:35.277 17:36:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:39:35.277 17:36:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:39:35.277 17:36:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:39:35.277 17:36:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:39:35.277 17:36:35 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:35.277 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:39:35.277 [2024-11-26 17:36:35.962221] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:39:35.277 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:35.277 17:36:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:39:35.277 17:36:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:35.277 17:36:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:35.277 17:36:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:35.277 17:36:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:35.277 17:36:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:35.277 17:36:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:35.277 17:36:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:35.277 17:36:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:35.277 17:36:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:35.536 17:36:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:35.537 17:36:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:35.537 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:35.537 17:36:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:39:35.537 17:36:35 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:35.537 17:36:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:35.537 "name": "raid_bdev1", 00:39:35.537 "uuid": "a537fd6d-3e9b-4ded-a188-b6da077910e5", 00:39:35.537 "strip_size_kb": 0, 00:39:35.537 "state": "online", 00:39:35.537 "raid_level": "raid1", 00:39:35.537 "superblock": false, 00:39:35.537 "num_base_bdevs": 4, 00:39:35.537 "num_base_bdevs_discovered": 3, 00:39:35.537 "num_base_bdevs_operational": 3, 00:39:35.537 "base_bdevs_list": [ 00:39:35.537 { 00:39:35.537 "name": null, 00:39:35.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:35.537 "is_configured": false, 00:39:35.537 "data_offset": 0, 00:39:35.537 "data_size": 65536 00:39:35.537 }, 00:39:35.537 { 00:39:35.537 "name": "BaseBdev2", 00:39:35.537 "uuid": "d842e16a-b0e9-55f9-95a2-3d948f01b716", 00:39:35.537 "is_configured": true, 00:39:35.537 "data_offset": 0, 00:39:35.537 "data_size": 65536 00:39:35.537 }, 00:39:35.537 { 00:39:35.537 "name": "BaseBdev3", 00:39:35.537 "uuid": "a4d085ca-bef2-5519-b1ce-aafbd57618fc", 00:39:35.537 "is_configured": true, 00:39:35.537 "data_offset": 0, 00:39:35.537 "data_size": 65536 00:39:35.537 }, 00:39:35.537 { 00:39:35.537 "name": "BaseBdev4", 00:39:35.537 "uuid": "69e806c6-98e8-5364-8a6f-48388cb5109f", 00:39:35.537 "is_configured": true, 00:39:35.537 "data_offset": 0, 00:39:35.537 "data_size": 65536 00:39:35.537 } 00:39:35.537 ] 00:39:35.537 }' 00:39:35.537 17:36:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:35.537 17:36:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:39:35.537 [2024-11-26 17:36:36.062252] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:39:35.537 I/O size of 3145728 is greater than zero copy threshold (65536). 00:39:35.537 Zero copy mechanism will not be used. 00:39:35.537 Running I/O for 60 seconds... 
00:39:35.796 17:36:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:39:35.796 17:36:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:35.796 17:36:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:39:35.796 [2024-11-26 17:36:36.363505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:35.796 17:36:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:35.796 17:36:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:39:35.796 [2024-11-26 17:36:36.439561] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:39:35.796 [2024-11-26 17:36:36.441738] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:39:36.055 [2024-11-26 17:36:36.558557] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:39:36.055 [2024-11-26 17:36:36.560060] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:39:36.313 [2024-11-26 17:36:36.784147] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:39:36.313 [2024-11-26 17:36:36.784960] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:39:36.571 131.00 IOPS, 393.00 MiB/s [2024-11-26T17:36:37.266Z] [2024-11-26 17:36:37.234349] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:39:36.830 17:36:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:36.830 17:36:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:39:36.830 17:36:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:36.830 17:36:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:36.830 17:36:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:36.830 17:36:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:36.830 17:36:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:36.830 17:36:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:36.830 17:36:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:39:36.830 17:36:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:36.830 [2024-11-26 17:36:37.454649] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:39:36.830 17:36:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:36.830 "name": "raid_bdev1", 00:39:36.830 "uuid": "a537fd6d-3e9b-4ded-a188-b6da077910e5", 00:39:36.830 "strip_size_kb": 0, 00:39:36.830 "state": "online", 00:39:36.830 "raid_level": "raid1", 00:39:36.830 "superblock": false, 00:39:36.830 "num_base_bdevs": 4, 00:39:36.830 "num_base_bdevs_discovered": 4, 00:39:36.830 "num_base_bdevs_operational": 4, 00:39:36.830 "process": { 00:39:36.830 "type": "rebuild", 00:39:36.830 "target": "spare", 00:39:36.830 "progress": { 00:39:36.830 "blocks": 12288, 00:39:36.830 "percent": 18 00:39:36.830 } 00:39:36.830 }, 00:39:36.830 "base_bdevs_list": [ 00:39:36.830 { 00:39:36.830 "name": "spare", 00:39:36.830 "uuid": "ba87d8fb-aeeb-5fa0-895f-1ff3b95da2fe", 00:39:36.830 "is_configured": true, 00:39:36.830 "data_offset": 0, 00:39:36.830 "data_size": 65536 00:39:36.830 }, 00:39:36.830 { 00:39:36.831 
"name": "BaseBdev2", 00:39:36.831 "uuid": "d842e16a-b0e9-55f9-95a2-3d948f01b716", 00:39:36.831 "is_configured": true, 00:39:36.831 "data_offset": 0, 00:39:36.831 "data_size": 65536 00:39:36.831 }, 00:39:36.831 { 00:39:36.831 "name": "BaseBdev3", 00:39:36.831 "uuid": "a4d085ca-bef2-5519-b1ce-aafbd57618fc", 00:39:36.831 "is_configured": true, 00:39:36.831 "data_offset": 0, 00:39:36.831 "data_size": 65536 00:39:36.831 }, 00:39:36.831 { 00:39:36.831 "name": "BaseBdev4", 00:39:36.831 "uuid": "69e806c6-98e8-5364-8a6f-48388cb5109f", 00:39:36.831 "is_configured": true, 00:39:36.831 "data_offset": 0, 00:39:36.831 "data_size": 65536 00:39:36.831 } 00:39:36.831 ] 00:39:36.831 }' 00:39:36.831 17:36:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:36.831 17:36:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:36.831 17:36:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:37.090 17:36:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:39:37.090 17:36:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:39:37.090 17:36:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:37.090 17:36:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:39:37.090 [2024-11-26 17:36:37.552050] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:37.090 [2024-11-26 17:36:37.581724] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:39:37.090 [2024-11-26 17:36:37.582120] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:39:37.090 [2024-11-26 17:36:37.685204] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: 
Finished rebuild on raid bdev raid_bdev1: No such device 00:39:37.090 [2024-11-26 17:36:37.694867] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:37.090 [2024-11-26 17:36:37.694976] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:37.090 [2024-11-26 17:36:37.695009] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:39:37.090 [2024-11-26 17:36:37.723029] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:39:37.090 17:36:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:37.090 17:36:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:39:37.090 17:36:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:37.090 17:36:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:37.090 17:36:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:37.090 17:36:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:37.090 17:36:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:37.090 17:36:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:37.090 17:36:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:37.090 17:36:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:37.090 17:36:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:37.090 17:36:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:37.090 17:36:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:39:37.090 17:36:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:37.090 17:36:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:39:37.090 17:36:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:37.349 17:36:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:37.349 "name": "raid_bdev1", 00:39:37.349 "uuid": "a537fd6d-3e9b-4ded-a188-b6da077910e5", 00:39:37.349 "strip_size_kb": 0, 00:39:37.349 "state": "online", 00:39:37.349 "raid_level": "raid1", 00:39:37.349 "superblock": false, 00:39:37.349 "num_base_bdevs": 4, 00:39:37.349 "num_base_bdevs_discovered": 3, 00:39:37.349 "num_base_bdevs_operational": 3, 00:39:37.349 "base_bdevs_list": [ 00:39:37.349 { 00:39:37.349 "name": null, 00:39:37.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:37.349 "is_configured": false, 00:39:37.349 "data_offset": 0, 00:39:37.349 "data_size": 65536 00:39:37.349 }, 00:39:37.349 { 00:39:37.349 "name": "BaseBdev2", 00:39:37.349 "uuid": "d842e16a-b0e9-55f9-95a2-3d948f01b716", 00:39:37.349 "is_configured": true, 00:39:37.349 "data_offset": 0, 00:39:37.349 "data_size": 65536 00:39:37.349 }, 00:39:37.349 { 00:39:37.349 "name": "BaseBdev3", 00:39:37.349 "uuid": "a4d085ca-bef2-5519-b1ce-aafbd57618fc", 00:39:37.349 "is_configured": true, 00:39:37.349 "data_offset": 0, 00:39:37.349 "data_size": 65536 00:39:37.349 }, 00:39:37.349 { 00:39:37.349 "name": "BaseBdev4", 00:39:37.349 "uuid": "69e806c6-98e8-5364-8a6f-48388cb5109f", 00:39:37.349 "is_configured": true, 00:39:37.349 "data_offset": 0, 00:39:37.349 "data_size": 65536 00:39:37.349 } 00:39:37.349 ] 00:39:37.349 }' 00:39:37.349 17:36:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:37.349 17:36:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:39:37.607 131.00 IOPS, 393.00 MiB/s [2024-11-26T17:36:38.302Z] 
17:36:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:37.607 17:36:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:37.607 17:36:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:39:37.607 17:36:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:39:37.607 17:36:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:37.607 17:36:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:37.607 17:36:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:37.607 17:36:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:39:37.607 17:36:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:37.607 17:36:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:37.607 17:36:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:37.607 "name": "raid_bdev1", 00:39:37.607 "uuid": "a537fd6d-3e9b-4ded-a188-b6da077910e5", 00:39:37.607 "strip_size_kb": 0, 00:39:37.607 "state": "online", 00:39:37.607 "raid_level": "raid1", 00:39:37.607 "superblock": false, 00:39:37.607 "num_base_bdevs": 4, 00:39:37.607 "num_base_bdevs_discovered": 3, 00:39:37.607 "num_base_bdevs_operational": 3, 00:39:37.607 "base_bdevs_list": [ 00:39:37.607 { 00:39:37.607 "name": null, 00:39:37.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:37.607 "is_configured": false, 00:39:37.607 "data_offset": 0, 00:39:37.608 "data_size": 65536 00:39:37.608 }, 00:39:37.608 { 00:39:37.608 "name": "BaseBdev2", 00:39:37.608 "uuid": "d842e16a-b0e9-55f9-95a2-3d948f01b716", 00:39:37.608 "is_configured": true, 00:39:37.608 "data_offset": 0, 00:39:37.608 
"data_size": 65536 00:39:37.608 }, 00:39:37.608 { 00:39:37.608 "name": "BaseBdev3", 00:39:37.608 "uuid": "a4d085ca-bef2-5519-b1ce-aafbd57618fc", 00:39:37.608 "is_configured": true, 00:39:37.608 "data_offset": 0, 00:39:37.608 "data_size": 65536 00:39:37.608 }, 00:39:37.608 { 00:39:37.608 "name": "BaseBdev4", 00:39:37.608 "uuid": "69e806c6-98e8-5364-8a6f-48388cb5109f", 00:39:37.608 "is_configured": true, 00:39:37.608 "data_offset": 0, 00:39:37.608 "data_size": 65536 00:39:37.608 } 00:39:37.608 ] 00:39:37.608 }' 00:39:37.608 17:36:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:37.608 17:36:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:39:37.608 17:36:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:37.867 17:36:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:39:37.867 17:36:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:39:37.867 17:36:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:37.867 17:36:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:39:37.867 [2024-11-26 17:36:38.348142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:37.867 17:36:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:37.867 17:36:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:39:37.867 [2024-11-26 17:36:38.418091] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:39:37.867 [2024-11-26 17:36:38.420187] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:39:37.867 [2024-11-26 17:36:38.536076] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 
offset_begin: 0 offset_end: 6144 00:39:37.867 [2024-11-26 17:36:38.536737] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:39:38.125 [2024-11-26 17:36:38.653313] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:39:38.125 [2024-11-26 17:36:38.654104] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:39:38.384 [2024-11-26 17:36:38.990334] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:39:38.640 147.33 IOPS, 442.00 MiB/s [2024-11-26T17:36:39.335Z] [2024-11-26 17:36:39.209035] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:39:38.898 17:36:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:38.898 17:36:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:38.898 17:36:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:38.898 17:36:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:38.898 17:36:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:38.898 17:36:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:38.898 17:36:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:38.898 17:36:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.898 17:36:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:39:38.898 17:36:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:39:38.898 [2024-11-26 17:36:39.449978] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:39:38.898 17:36:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:38.898 "name": "raid_bdev1", 00:39:38.898 "uuid": "a537fd6d-3e9b-4ded-a188-b6da077910e5", 00:39:38.898 "strip_size_kb": 0, 00:39:38.898 "state": "online", 00:39:38.898 "raid_level": "raid1", 00:39:38.898 "superblock": false, 00:39:38.898 "num_base_bdevs": 4, 00:39:38.898 "num_base_bdevs_discovered": 4, 00:39:38.898 "num_base_bdevs_operational": 4, 00:39:38.898 "process": { 00:39:38.898 "type": "rebuild", 00:39:38.898 "target": "spare", 00:39:38.898 "progress": { 00:39:38.898 "blocks": 12288, 00:39:38.898 "percent": 18 00:39:38.898 } 00:39:38.898 }, 00:39:38.898 "base_bdevs_list": [ 00:39:38.898 { 00:39:38.898 "name": "spare", 00:39:38.898 "uuid": "ba87d8fb-aeeb-5fa0-895f-1ff3b95da2fe", 00:39:38.898 "is_configured": true, 00:39:38.898 "data_offset": 0, 00:39:38.898 "data_size": 65536 00:39:38.898 }, 00:39:38.898 { 00:39:38.898 "name": "BaseBdev2", 00:39:38.898 "uuid": "d842e16a-b0e9-55f9-95a2-3d948f01b716", 00:39:38.898 "is_configured": true, 00:39:38.898 "data_offset": 0, 00:39:38.898 "data_size": 65536 00:39:38.898 }, 00:39:38.898 { 00:39:38.898 "name": "BaseBdev3", 00:39:38.898 "uuid": "a4d085ca-bef2-5519-b1ce-aafbd57618fc", 00:39:38.898 "is_configured": true, 00:39:38.898 "data_offset": 0, 00:39:38.898 "data_size": 65536 00:39:38.898 }, 00:39:38.898 { 00:39:38.898 "name": "BaseBdev4", 00:39:38.898 "uuid": "69e806c6-98e8-5364-8a6f-48388cb5109f", 00:39:38.898 "is_configured": true, 00:39:38.898 "data_offset": 0, 00:39:38.898 "data_size": 65536 00:39:38.898 } 00:39:38.898 ] 00:39:38.898 }' 00:39:38.898 17:36:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:38.898 17:36:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:39:38.898 17:36:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:38.898 17:36:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:39:38.898 17:36:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:39:38.898 17:36:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:39:38.898 17:36:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:39:38.898 17:36:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:39:38.898 17:36:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:39:38.898 17:36:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.898 17:36:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:39:38.898 [2024-11-26 17:36:39.560674] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:39:39.158 [2024-11-26 17:36:39.662182] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:39:39.158 [2024-11-26 17:36:39.770704] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:39:39.158 [2024-11-26 17:36:39.770827] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:39:39.158 17:36:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:39.158 17:36:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:39:39.158 17:36:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:39:39.158 17:36:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:39:39.158 17:36:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:39.158 17:36:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:39.158 17:36:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:39.158 17:36:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:39.158 17:36:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:39.158 17:36:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:39.158 17:36:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:39.158 17:36:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:39:39.158 17:36:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:39.158 17:36:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:39.158 "name": "raid_bdev1", 00:39:39.158 "uuid": "a537fd6d-3e9b-4ded-a188-b6da077910e5", 00:39:39.158 "strip_size_kb": 0, 00:39:39.158 "state": "online", 00:39:39.158 "raid_level": "raid1", 00:39:39.158 "superblock": false, 00:39:39.158 "num_base_bdevs": 4, 00:39:39.158 "num_base_bdevs_discovered": 3, 00:39:39.158 "num_base_bdevs_operational": 3, 00:39:39.158 "process": { 00:39:39.158 "type": "rebuild", 00:39:39.158 "target": "spare", 00:39:39.158 "progress": { 00:39:39.158 "blocks": 16384, 00:39:39.158 "percent": 25 00:39:39.158 } 00:39:39.158 }, 00:39:39.158 "base_bdevs_list": [ 00:39:39.158 { 00:39:39.158 "name": "spare", 00:39:39.158 "uuid": "ba87d8fb-aeeb-5fa0-895f-1ff3b95da2fe", 00:39:39.158 "is_configured": true, 00:39:39.158 "data_offset": 0, 00:39:39.158 "data_size": 65536 00:39:39.158 }, 00:39:39.158 { 00:39:39.158 "name": null, 00:39:39.158 "uuid": "00000000-0000-0000-0000-000000000000", 
00:39:39.158 "is_configured": false, 00:39:39.158 "data_offset": 0, 00:39:39.158 "data_size": 65536 00:39:39.158 }, 00:39:39.158 { 00:39:39.158 "name": "BaseBdev3", 00:39:39.158 "uuid": "a4d085ca-bef2-5519-b1ce-aafbd57618fc", 00:39:39.158 "is_configured": true, 00:39:39.158 "data_offset": 0, 00:39:39.158 "data_size": 65536 00:39:39.158 }, 00:39:39.158 { 00:39:39.158 "name": "BaseBdev4", 00:39:39.158 "uuid": "69e806c6-98e8-5364-8a6f-48388cb5109f", 00:39:39.158 "is_configured": true, 00:39:39.158 "data_offset": 0, 00:39:39.158 "data_size": 65536 00:39:39.158 } 00:39:39.158 ] 00:39:39.158 }' 00:39:39.158 17:36:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:39.416 17:36:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:39.416 17:36:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:39.416 17:36:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:39:39.416 17:36:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=494 00:39:39.416 17:36:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:39:39.416 17:36:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:39.416 17:36:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:39.416 17:36:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:39.416 17:36:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:39.416 17:36:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:39.416 17:36:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:39.416 17:36:39 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:39.416 17:36:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:39.416 17:36:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:39:39.416 17:36:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:39.416 17:36:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:39.416 "name": "raid_bdev1", 00:39:39.417 "uuid": "a537fd6d-3e9b-4ded-a188-b6da077910e5", 00:39:39.417 "strip_size_kb": 0, 00:39:39.417 "state": "online", 00:39:39.417 "raid_level": "raid1", 00:39:39.417 "superblock": false, 00:39:39.417 "num_base_bdevs": 4, 00:39:39.417 "num_base_bdevs_discovered": 3, 00:39:39.417 "num_base_bdevs_operational": 3, 00:39:39.417 "process": { 00:39:39.417 "type": "rebuild", 00:39:39.417 "target": "spare", 00:39:39.417 "progress": { 00:39:39.417 "blocks": 18432, 00:39:39.417 "percent": 28 00:39:39.417 } 00:39:39.417 }, 00:39:39.417 "base_bdevs_list": [ 00:39:39.417 { 00:39:39.417 "name": "spare", 00:39:39.417 "uuid": "ba87d8fb-aeeb-5fa0-895f-1ff3b95da2fe", 00:39:39.417 "is_configured": true, 00:39:39.417 "data_offset": 0, 00:39:39.417 "data_size": 65536 00:39:39.417 }, 00:39:39.417 { 00:39:39.417 "name": null, 00:39:39.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:39.417 "is_configured": false, 00:39:39.417 "data_offset": 0, 00:39:39.417 "data_size": 65536 00:39:39.417 }, 00:39:39.417 { 00:39:39.417 "name": "BaseBdev3", 00:39:39.417 "uuid": "a4d085ca-bef2-5519-b1ce-aafbd57618fc", 00:39:39.417 "is_configured": true, 00:39:39.417 "data_offset": 0, 00:39:39.417 "data_size": 65536 00:39:39.417 }, 00:39:39.417 { 00:39:39.417 "name": "BaseBdev4", 00:39:39.417 "uuid": "69e806c6-98e8-5364-8a6f-48388cb5109f", 00:39:39.417 "is_configured": true, 00:39:39.417 "data_offset": 0, 00:39:39.417 "data_size": 65536 00:39:39.417 } 00:39:39.417 ] 00:39:39.417 }' 
00:39:39.417 17:36:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:39.417 [2024-11-26 17:36:40.023352] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:39:39.417 17:36:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:39.417 17:36:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:39.417 132.00 IOPS, 396.00 MiB/s [2024-11-26T17:36:40.112Z] 17:36:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:39:39.417 17:36:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:39:39.676 [2024-11-26 17:36:40.262091] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:39:39.936 [2024-11-26 17:36:40.504124] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:39:39.936 [2024-11-26 17:36:40.505300] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:39:40.195 [2024-11-26 17:36:40.722752] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:39:40.459 [2024-11-26 17:36:41.026894] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:39:40.459 120.00 IOPS, 360.00 MiB/s [2024-11-26T17:36:41.154Z] 17:36:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:39:40.459 17:36:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:40.459 17:36:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:39:40.459 17:36:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:40.459 17:36:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:40.459 17:36:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:40.459 17:36:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:40.459 17:36:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:40.459 17:36:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.459 17:36:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:39:40.459 17:36:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:40.729 17:36:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:40.729 "name": "raid_bdev1", 00:39:40.729 "uuid": "a537fd6d-3e9b-4ded-a188-b6da077910e5", 00:39:40.729 "strip_size_kb": 0, 00:39:40.729 "state": "online", 00:39:40.729 "raid_level": "raid1", 00:39:40.729 "superblock": false, 00:39:40.729 "num_base_bdevs": 4, 00:39:40.729 "num_base_bdevs_discovered": 3, 00:39:40.729 "num_base_bdevs_operational": 3, 00:39:40.729 "process": { 00:39:40.729 "type": "rebuild", 00:39:40.729 "target": "spare", 00:39:40.729 "progress": { 00:39:40.729 "blocks": 34816, 00:39:40.729 "percent": 53 00:39:40.729 } 00:39:40.729 }, 00:39:40.729 "base_bdevs_list": [ 00:39:40.729 { 00:39:40.729 "name": "spare", 00:39:40.729 "uuid": "ba87d8fb-aeeb-5fa0-895f-1ff3b95da2fe", 00:39:40.729 "is_configured": true, 00:39:40.729 "data_offset": 0, 00:39:40.729 "data_size": 65536 00:39:40.729 }, 00:39:40.729 { 00:39:40.729 "name": null, 00:39:40.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:40.729 "is_configured": false, 00:39:40.729 "data_offset": 0, 00:39:40.729 "data_size": 65536 00:39:40.729 
}, 00:39:40.729 { 00:39:40.729 "name": "BaseBdev3", 00:39:40.729 "uuid": "a4d085ca-bef2-5519-b1ce-aafbd57618fc", 00:39:40.729 "is_configured": true, 00:39:40.729 "data_offset": 0, 00:39:40.729 "data_size": 65536 00:39:40.729 }, 00:39:40.729 { 00:39:40.729 "name": "BaseBdev4", 00:39:40.729 "uuid": "69e806c6-98e8-5364-8a6f-48388cb5109f", 00:39:40.729 "is_configured": true, 00:39:40.729 "data_offset": 0, 00:39:40.729 "data_size": 65536 00:39:40.729 } 00:39:40.729 ] 00:39:40.729 }' 00:39:40.729 17:36:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:40.729 17:36:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:40.729 17:36:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:40.729 17:36:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:39:40.729 17:36:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:39:40.729 [2024-11-26 17:36:41.347087] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:39:41.297 [2024-11-26 17:36:41.799696] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:39:41.555 [2024-11-26 17:36:42.009315] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:39:41.555 107.33 IOPS, 322.00 MiB/s [2024-11-26T17:36:42.250Z] [2024-11-26 17:36:42.110962] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:39:41.555 [2024-11-26 17:36:42.111415] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:39:41.813 17:36:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:39:41.813 17:36:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:41.813 17:36:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:41.813 17:36:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:41.813 17:36:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:41.813 17:36:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:41.813 17:36:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:41.813 17:36:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:41.814 17:36:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:41.814 17:36:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:39:41.814 17:36:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:41.814 17:36:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:41.814 "name": "raid_bdev1", 00:39:41.814 "uuid": "a537fd6d-3e9b-4ded-a188-b6da077910e5", 00:39:41.814 "strip_size_kb": 0, 00:39:41.814 "state": "online", 00:39:41.814 "raid_level": "raid1", 00:39:41.814 "superblock": false, 00:39:41.814 "num_base_bdevs": 4, 00:39:41.814 "num_base_bdevs_discovered": 3, 00:39:41.814 "num_base_bdevs_operational": 3, 00:39:41.814 "process": { 00:39:41.814 "type": "rebuild", 00:39:41.814 "target": "spare", 00:39:41.814 "progress": { 00:39:41.814 "blocks": 53248, 00:39:41.814 "percent": 81 00:39:41.814 } 00:39:41.814 }, 00:39:41.814 "base_bdevs_list": [ 00:39:41.814 { 00:39:41.814 "name": "spare", 00:39:41.814 "uuid": "ba87d8fb-aeeb-5fa0-895f-1ff3b95da2fe", 00:39:41.814 "is_configured": true, 00:39:41.814 "data_offset": 0, 
00:39:41.814 "data_size": 65536 00:39:41.814 }, 00:39:41.814 { 00:39:41.814 "name": null, 00:39:41.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:41.814 "is_configured": false, 00:39:41.814 "data_offset": 0, 00:39:41.814 "data_size": 65536 00:39:41.814 }, 00:39:41.814 { 00:39:41.814 "name": "BaseBdev3", 00:39:41.814 "uuid": "a4d085ca-bef2-5519-b1ce-aafbd57618fc", 00:39:41.814 "is_configured": true, 00:39:41.814 "data_offset": 0, 00:39:41.814 "data_size": 65536 00:39:41.814 }, 00:39:41.814 { 00:39:41.814 "name": "BaseBdev4", 00:39:41.814 "uuid": "69e806c6-98e8-5364-8a6f-48388cb5109f", 00:39:41.814 "is_configured": true, 00:39:41.814 "data_offset": 0, 00:39:41.814 "data_size": 65536 00:39:41.814 } 00:39:41.814 ] 00:39:41.814 }' 00:39:41.814 17:36:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:41.814 17:36:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:41.814 17:36:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:41.814 17:36:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:39:41.814 17:36:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:39:42.380 [2024-11-26 17:36:42.880560] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:39:42.380 [2024-11-26 17:36:42.980363] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:39:42.380 [2024-11-26 17:36:42.982693] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:42.948 97.00 IOPS, 291.00 MiB/s [2024-11-26T17:36:43.643Z] 17:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:39:42.948 17:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:42.948 17:36:43 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:42.948 17:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:42.948 17:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:42.948 17:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:42.948 17:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:42.948 17:36:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:42.948 17:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:42.948 17:36:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:39:42.948 17:36:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:42.948 17:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:42.948 "name": "raid_bdev1", 00:39:42.948 "uuid": "a537fd6d-3e9b-4ded-a188-b6da077910e5", 00:39:42.948 "strip_size_kb": 0, 00:39:42.948 "state": "online", 00:39:42.948 "raid_level": "raid1", 00:39:42.948 "superblock": false, 00:39:42.948 "num_base_bdevs": 4, 00:39:42.948 "num_base_bdevs_discovered": 3, 00:39:42.948 "num_base_bdevs_operational": 3, 00:39:42.948 "base_bdevs_list": [ 00:39:42.948 { 00:39:42.948 "name": "spare", 00:39:42.948 "uuid": "ba87d8fb-aeeb-5fa0-895f-1ff3b95da2fe", 00:39:42.948 "is_configured": true, 00:39:42.948 "data_offset": 0, 00:39:42.948 "data_size": 65536 00:39:42.948 }, 00:39:42.948 { 00:39:42.948 "name": null, 00:39:42.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:42.948 "is_configured": false, 00:39:42.948 "data_offset": 0, 00:39:42.948 "data_size": 65536 00:39:42.948 }, 00:39:42.948 { 00:39:42.948 "name": "BaseBdev3", 00:39:42.948 "uuid": "a4d085ca-bef2-5519-b1ce-aafbd57618fc", 
00:39:42.948 "is_configured": true, 00:39:42.948 "data_offset": 0, 00:39:42.948 "data_size": 65536 00:39:42.948 }, 00:39:42.948 { 00:39:42.948 "name": "BaseBdev4", 00:39:42.948 "uuid": "69e806c6-98e8-5364-8a6f-48388cb5109f", 00:39:42.948 "is_configured": true, 00:39:42.948 "data_offset": 0, 00:39:42.948 "data_size": 65536 00:39:42.948 } 00:39:42.948 ] 00:39:42.948 }' 00:39:42.948 17:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:42.948 17:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:39:42.948 17:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:42.948 17:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:39:42.948 17:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:39:42.948 17:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:42.948 17:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:42.948 17:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:39:42.948 17:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:39:42.948 17:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:42.948 17:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:42.948 17:36:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:42.948 17:36:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:39:42.948 17:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:42.948 17:36:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:39:42.948 17:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:42.948 "name": "raid_bdev1", 00:39:42.948 "uuid": "a537fd6d-3e9b-4ded-a188-b6da077910e5", 00:39:42.948 "strip_size_kb": 0, 00:39:42.948 "state": "online", 00:39:42.948 "raid_level": "raid1", 00:39:42.948 "superblock": false, 00:39:42.948 "num_base_bdevs": 4, 00:39:42.948 "num_base_bdevs_discovered": 3, 00:39:42.948 "num_base_bdevs_operational": 3, 00:39:42.948 "base_bdevs_list": [ 00:39:42.948 { 00:39:42.948 "name": "spare", 00:39:42.948 "uuid": "ba87d8fb-aeeb-5fa0-895f-1ff3b95da2fe", 00:39:42.948 "is_configured": true, 00:39:42.948 "data_offset": 0, 00:39:42.948 "data_size": 65536 00:39:42.948 }, 00:39:42.948 { 00:39:42.948 "name": null, 00:39:42.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:42.948 "is_configured": false, 00:39:42.948 "data_offset": 0, 00:39:42.948 "data_size": 65536 00:39:42.948 }, 00:39:42.948 { 00:39:42.948 "name": "BaseBdev3", 00:39:42.948 "uuid": "a4d085ca-bef2-5519-b1ce-aafbd57618fc", 00:39:42.948 "is_configured": true, 00:39:42.948 "data_offset": 0, 00:39:42.948 "data_size": 65536 00:39:42.948 }, 00:39:42.948 { 00:39:42.948 "name": "BaseBdev4", 00:39:42.948 "uuid": "69e806c6-98e8-5364-8a6f-48388cb5109f", 00:39:42.948 "is_configured": true, 00:39:42.948 "data_offset": 0, 00:39:42.948 "data_size": 65536 00:39:42.948 } 00:39:42.948 ] 00:39:42.948 }' 00:39:42.948 17:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:43.208 17:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:39:43.208 17:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:43.208 17:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:39:43.208 17:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online 
raid1 0 3 00:39:43.208 17:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:43.208 17:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:43.208 17:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:43.208 17:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:43.208 17:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:43.208 17:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:43.208 17:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:43.208 17:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:43.208 17:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:43.208 17:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:43.208 17:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:43.208 17:36:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.208 17:36:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:39:43.208 17:36:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.208 17:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:43.208 "name": "raid_bdev1", 00:39:43.208 "uuid": "a537fd6d-3e9b-4ded-a188-b6da077910e5", 00:39:43.208 "strip_size_kb": 0, 00:39:43.208 "state": "online", 00:39:43.208 "raid_level": "raid1", 00:39:43.208 "superblock": false, 00:39:43.208 "num_base_bdevs": 4, 00:39:43.208 "num_base_bdevs_discovered": 3, 00:39:43.208 "num_base_bdevs_operational": 3, 00:39:43.208 
"base_bdevs_list": [ 00:39:43.208 { 00:39:43.208 "name": "spare", 00:39:43.208 "uuid": "ba87d8fb-aeeb-5fa0-895f-1ff3b95da2fe", 00:39:43.208 "is_configured": true, 00:39:43.208 "data_offset": 0, 00:39:43.208 "data_size": 65536 00:39:43.208 }, 00:39:43.208 { 00:39:43.208 "name": null, 00:39:43.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:43.208 "is_configured": false, 00:39:43.208 "data_offset": 0, 00:39:43.208 "data_size": 65536 00:39:43.208 }, 00:39:43.208 { 00:39:43.208 "name": "BaseBdev3", 00:39:43.208 "uuid": "a4d085ca-bef2-5519-b1ce-aafbd57618fc", 00:39:43.208 "is_configured": true, 00:39:43.208 "data_offset": 0, 00:39:43.208 "data_size": 65536 00:39:43.208 }, 00:39:43.208 { 00:39:43.208 "name": "BaseBdev4", 00:39:43.208 "uuid": "69e806c6-98e8-5364-8a6f-48388cb5109f", 00:39:43.208 "is_configured": true, 00:39:43.208 "data_offset": 0, 00:39:43.208 "data_size": 65536 00:39:43.208 } 00:39:43.208 ] 00:39:43.208 }' 00:39:43.208 17:36:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:43.208 17:36:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:39:43.467 89.50 IOPS, 268.50 MiB/s [2024-11-26T17:36:44.162Z] 17:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:39:43.467 17:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.468 17:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:39:43.468 [2024-11-26 17:36:44.152096] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:43.468 [2024-11-26 17:36:44.152183] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:43.727 00:39:43.727 Latency(us) 00:39:43.727 [2024-11-26T17:36:44.422Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:43.727 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 
2, IO size: 3145728) 00:39:43.727 raid_bdev1 : 8.15 88.57 265.72 0.00 0.00 15971.64 316.59 113557.58 00:39:43.727 [2024-11-26T17:36:44.422Z] =================================================================================================================== 00:39:43.727 [2024-11-26T17:36:44.422Z] Total : 88.57 265.72 0.00 0.00 15971.64 316.59 113557.58 00:39:43.727 [2024-11-26 17:36:44.223181] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:43.727 [2024-11-26 17:36:44.223313] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:43.727 [2024-11-26 17:36:44.223437] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:43.727 [2024-11-26 17:36:44.223488] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:39:43.727 { 00:39:43.727 "results": [ 00:39:43.727 { 00:39:43.727 "job": "raid_bdev1", 00:39:43.727 "core_mask": "0x1", 00:39:43.727 "workload": "randrw", 00:39:43.727 "percentage": 50, 00:39:43.727 "status": "finished", 00:39:43.727 "queue_depth": 2, 00:39:43.727 "io_size": 3145728, 00:39:43.727 "runtime": 8.151408, 00:39:43.727 "iops": 88.5736550053684, 00:39:43.727 "mibps": 265.7209650161052, 00:39:43.727 "io_failed": 0, 00:39:43.727 "io_timeout": 0, 00:39:43.727 "avg_latency_us": 15971.640232735364, 00:39:43.727 "min_latency_us": 316.5903930131004, 00:39:43.727 "max_latency_us": 113557.57554585153 00:39:43.727 } 00:39:43.727 ], 00:39:43.727 "core_count": 1 00:39:43.727 } 00:39:43.727 17:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.727 17:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:43.727 17:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.727 17:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 
00:39:43.727 17:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:39:43.727 17:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.727 17:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:39:43.727 17:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:39:43.727 17:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:39:43.727 17:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:39:43.727 17:36:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:39:43.727 17:36:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:39:43.727 17:36:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:39:43.727 17:36:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:39:43.727 17:36:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:39:43.727 17:36:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:39:43.727 17:36:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:39:43.727 17:36:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:43.728 17:36:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:39:43.986 /dev/nbd0 00:39:43.986 17:36:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:39:43.986 17:36:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:39:43.986 17:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:39:43.986 17:36:44 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:39:43.986 17:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:43.986 17:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:43.986 17:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:39:43.986 17:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:39:43.986 17:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:43.986 17:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:43.986 17:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:43.986 1+0 records in 00:39:43.986 1+0 records out 00:39:43.986 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000519802 s, 7.9 MB/s 00:39:43.986 17:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:43.986 17:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:39:43.986 17:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:43.986 17:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:43.986 17:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:39:43.986 17:36:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:43.986 17:36:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:43.986 17:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:39:43.986 17:36:44 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:39:43.986 17:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:39:43.986 17:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:39:43.986 17:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:39:43.986 17:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:39:43.986 17:36:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:39:43.986 17:36:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:39:43.986 17:36:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:39:43.986 17:36:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:39:43.986 17:36:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:39:43.986 17:36:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:39:43.986 17:36:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:39:43.986 17:36:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:43.986 17:36:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:39:44.245 /dev/nbd1 00:39:44.245 17:36:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:39:44.245 17:36:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:39:44.245 17:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:39:44.245 17:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:39:44.245 17:36:44 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:44.245 17:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:44.245 17:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:39:44.245 17:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:39:44.245 17:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:44.245 17:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:44.245 17:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:44.245 1+0 records in 00:39:44.245 1+0 records out 00:39:44.245 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000587125 s, 7.0 MB/s 00:39:44.245 17:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:44.245 17:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:39:44.245 17:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:44.245 17:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:44.245 17:36:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:39:44.245 17:36:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:44.245 17:36:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:44.245 17:36:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:39:44.504 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:39:44.504 17:36:45 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:39:44.504 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:39:44.504 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:44.504 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:39:44.504 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:44.504 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:39:44.762 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:39:44.762 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:39:44.762 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:39:44.762 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:44.762 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:44.762 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:39:44.762 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:39:44.762 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:39:44.762 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:39:44.763 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:39:44.763 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:39:44.763 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:39:44.763 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev4') 00:39:44.763 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:39:44.763 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:39:44.763 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:39:44.763 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:39:44.763 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:39:44.763 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:44.763 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:39:45.021 /dev/nbd1 00:39:45.021 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:39:45.021 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:39:45.021 17:36:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:39:45.021 17:36:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:39:45.021 17:36:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:45.021 17:36:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:45.021 17:36:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:39:45.021 17:36:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:39:45.021 17:36:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:45.021 17:36:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:45.021 17:36:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:45.021 1+0 records in 00:39:45.021 1+0 records out 00:39:45.021 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000507443 s, 8.1 MB/s 00:39:45.021 17:36:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:45.021 17:36:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:39:45.021 17:36:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:45.021 17:36:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:45.021 17:36:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:39:45.021 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:45.021 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:45.021 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:39:45.021 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:39:45.021 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:39:45.021 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:39:45.021 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:45.021 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:39:45.021 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:45.021 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:39:45.280 17:36:45 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:39:45.280 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:39:45.280 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:39:45.280 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:45.280 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:45.280 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:39:45.280 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:39:45.280 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:39:45.280 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:39:45.280 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:39:45.280 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:39:45.280 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:45.280 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:39:45.280 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:45.280 17:36:45 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:39:45.539 17:36:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:39:45.539 17:36:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:39:45.539 17:36:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:39:45.539 17:36:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:45.539 17:36:46 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:45.539 17:36:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:39:45.539 17:36:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:39:45.539 17:36:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:39:45.539 17:36:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:39:45.539 17:36:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 79051 00:39:45.539 17:36:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 79051 ']' 00:39:45.539 17:36:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 79051 00:39:45.539 17:36:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:39:45.539 17:36:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:45.539 17:36:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79051 00:39:45.539 17:36:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:45.539 17:36:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:45.539 17:36:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79051' 00:39:45.539 killing process with pid 79051 00:39:45.539 17:36:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 79051 00:39:45.539 Received shutdown signal, test time was about 10.062377 seconds 00:39:45.539 00:39:45.539 Latency(us) 00:39:45.539 [2024-11-26T17:36:46.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:45.539 [2024-11-26T17:36:46.234Z] 
=================================================================================================================== 00:39:45.539 [2024-11-26T17:36:46.234Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:45.539 17:36:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 79051 00:39:45.539 [2024-11-26 17:36:46.107295] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:39:46.107 [2024-11-26 17:36:46.537155] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:39:47.487 17:36:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:39:47.487 ************************************ 00:39:47.487 END TEST raid_rebuild_test_io 00:39:47.487 ************************************ 00:39:47.487 00:39:47.487 real 0m13.654s 00:39:47.487 user 0m17.241s 00:39:47.487 sys 0m1.863s 00:39:47.487 17:36:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:47.487 17:36:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:39:47.487 17:36:47 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:39:47.487 17:36:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:39:47.487 17:36:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:47.487 17:36:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:39:47.487 ************************************ 00:39:47.487 START TEST raid_rebuild_test_sb_io 00:39:47.487 ************************************ 00:39:47.487 17:36:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:39:47.487 17:36:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:39:47.487 17:36:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:39:47.487 17:36:47 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@571 -- # local superblock=true 00:39:47.487 17:36:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:39:47.487 17:36:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:39:47.487 17:36:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:39:47.487 17:36:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:39:47.487 17:36:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:39:47.487 17:36:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:39:47.487 17:36:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:39:47.487 17:36:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:39:47.487 17:36:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:39:47.487 17:36:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:39:47.487 17:36:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:39:47.487 17:36:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:39:47.487 17:36:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:39:47.487 17:36:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:39:47.487 17:36:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:39:47.487 17:36:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:39:47.487 17:36:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:39:47.487 17:36:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:39:47.487 17:36:47 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:39:47.487 17:36:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:39:47.487 17:36:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:39:47.487 17:36:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:39:47.487 17:36:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:39:47.487 17:36:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:39:47.487 17:36:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:39:47.487 17:36:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:39:47.487 17:36:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:39:47.487 17:36:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79460 00:39:47.487 17:36:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:39:47.487 17:36:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79460 00:39:47.487 17:36:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79460 ']' 00:39:47.487 17:36:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:47.487 17:36:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:47.487 17:36:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:47.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:39:47.487 17:36:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:47.487 17:36:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:47.487 [2024-11-26 17:36:47.954151] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:39:47.487 [2024-11-26 17:36:47.954366] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:39:47.487 Zero copy mechanism will not be used. 00:39:47.487 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79460 ] 00:39:47.487 [2024-11-26 17:36:48.128275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:47.746 [2024-11-26 17:36:48.261498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:48.005 [2024-11-26 17:36:48.461214] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:48.005 [2024-11-26 17:36:48.461365] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:48.271 17:36:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:48.271 17:36:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:39:48.271 17:36:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:39:48.271 17:36:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:39:48.271 17:36:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.271 17:36:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:48.271 BaseBdev1_malloc 00:39:48.271 17:36:48 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.271 17:36:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:39:48.271 17:36:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.271 17:36:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:48.271 [2024-11-26 17:36:48.865942] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:39:48.271 [2024-11-26 17:36:48.866067] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:48.271 [2024-11-26 17:36:48.866130] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:39:48.271 [2024-11-26 17:36:48.866167] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:48.271 [2024-11-26 17:36:48.868752] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:48.271 [2024-11-26 17:36:48.868850] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:39:48.271 BaseBdev1 00:39:48.271 17:36:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.271 17:36:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:39:48.271 17:36:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:39:48.271 17:36:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.271 17:36:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:48.271 BaseBdev2_malloc 00:39:48.271 17:36:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.271 17:36:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:39:48.271 17:36:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.271 17:36:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:48.271 [2024-11-26 17:36:48.921137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:39:48.271 [2024-11-26 17:36:48.921208] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:48.271 [2024-11-26 17:36:48.921232] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:39:48.271 [2024-11-26 17:36:48.921244] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:48.271 [2024-11-26 17:36:48.923372] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:48.271 [2024-11-26 17:36:48.923411] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:39:48.271 BaseBdev2 00:39:48.271 17:36:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.271 17:36:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:39:48.271 17:36:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:39:48.271 17:36:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.271 17:36:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:48.539 BaseBdev3_malloc 00:39:48.539 17:36:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.539 17:36:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:39:48.539 17:36:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.539 17:36:48 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:48.539 [2024-11-26 17:36:48.989157] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:39:48.539 [2024-11-26 17:36:48.989217] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:48.539 [2024-11-26 17:36:48.989238] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:39:48.539 [2024-11-26 17:36:48.989249] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:48.539 [2024-11-26 17:36:48.991419] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:48.539 [2024-11-26 17:36:48.991497] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:39:48.539 BaseBdev3 00:39:48.539 17:36:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.539 17:36:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:39:48.539 17:36:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:39:48.539 17:36:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.539 17:36:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:48.539 BaseBdev4_malloc 00:39:48.539 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.539 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:39:48.539 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.539 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:48.539 [2024-11-26 17:36:49.043235] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev4_malloc 00:39:48.539 [2024-11-26 17:36:49.043358] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:48.539 [2024-11-26 17:36:49.043388] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:39:48.539 [2024-11-26 17:36:49.043401] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:48.539 [2024-11-26 17:36:49.045767] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:48.539 [2024-11-26 17:36:49.045818] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:39:48.539 BaseBdev4 00:39:48.539 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.539 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:39:48.539 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.539 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:48.539 spare_malloc 00:39:48.539 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.539 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:39:48.539 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.539 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:48.539 spare_delay 00:39:48.539 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.539 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:39:48.539 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 
-- # xtrace_disable 00:39:48.539 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:48.539 [2024-11-26 17:36:49.110760] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:39:48.539 [2024-11-26 17:36:49.110901] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:48.539 [2024-11-26 17:36:49.110943] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:39:48.539 [2024-11-26 17:36:49.110977] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:48.539 [2024-11-26 17:36:49.113321] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:48.539 [2024-11-26 17:36:49.113418] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:39:48.539 spare 00:39:48.539 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.539 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:39:48.539 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.539 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:48.539 [2024-11-26 17:36:49.122866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:48.539 [2024-11-26 17:36:49.125022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:39:48.539 [2024-11-26 17:36:49.125187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:39:48.539 [2024-11-26 17:36:49.125300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:39:48.539 [2024-11-26 17:36:49.125615] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007780 00:39:48.539 [2024-11-26 17:36:49.125695] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:39:48.539 [2024-11-26 17:36:49.126044] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:39:48.539 [2024-11-26 17:36:49.126290] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:39:48.539 [2024-11-26 17:36:49.126336] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:39:48.539 [2024-11-26 17:36:49.126587] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:48.539 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.539 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:39:48.539 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:48.539 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:48.539 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:48.539 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:48.539 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:39:48.539 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:48.539 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:48.539 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:48.539 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:48.539 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:39:48.539 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:48.539 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.539 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:48.539 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.539 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:48.539 "name": "raid_bdev1", 00:39:48.539 "uuid": "c044c880-21d8-4791-af2a-82c043040134", 00:39:48.539 "strip_size_kb": 0, 00:39:48.539 "state": "online", 00:39:48.539 "raid_level": "raid1", 00:39:48.539 "superblock": true, 00:39:48.539 "num_base_bdevs": 4, 00:39:48.540 "num_base_bdevs_discovered": 4, 00:39:48.540 "num_base_bdevs_operational": 4, 00:39:48.540 "base_bdevs_list": [ 00:39:48.540 { 00:39:48.540 "name": "BaseBdev1", 00:39:48.540 "uuid": "b487f191-44b7-5d44-8ab8-5b5b5502846e", 00:39:48.540 "is_configured": true, 00:39:48.540 "data_offset": 2048, 00:39:48.540 "data_size": 63488 00:39:48.540 }, 00:39:48.540 { 00:39:48.540 "name": "BaseBdev2", 00:39:48.540 "uuid": "6475cbc7-401e-5cd0-a1f3-3e010bb49c6d", 00:39:48.540 "is_configured": true, 00:39:48.540 "data_offset": 2048, 00:39:48.540 "data_size": 63488 00:39:48.540 }, 00:39:48.540 { 00:39:48.540 "name": "BaseBdev3", 00:39:48.540 "uuid": "1620a8b3-cd8d-5410-8e51-679092f65397", 00:39:48.540 "is_configured": true, 00:39:48.540 "data_offset": 2048, 00:39:48.540 "data_size": 63488 00:39:48.540 }, 00:39:48.540 { 00:39:48.540 "name": "BaseBdev4", 00:39:48.540 "uuid": "d37e0088-ef7d-59c9-bcbc-42db20a8ea3a", 00:39:48.540 "is_configured": true, 00:39:48.540 "data_offset": 2048, 00:39:48.540 "data_size": 63488 00:39:48.540 } 00:39:48.540 ] 00:39:48.540 }' 00:39:48.540 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:39:48.540 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:49.108 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:39:49.108 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:39:49.108 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:49.108 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:49.108 [2024-11-26 17:36:49.606314] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:49.108 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:49.108 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:39:49.108 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:49.108 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:39:49.108 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:49.108 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:49.108 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:49.108 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:39:49.108 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:39:49.108 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:39:49.108 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:49.108 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:39:49.108 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:49.108 [2024-11-26 17:36:49.697796] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:39:49.108 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:49.108 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:39:49.108 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:49.108 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:49.108 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:49.108 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:49.108 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:49.108 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:49.108 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:49.108 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:49.108 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:49.108 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:49.108 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:49.108 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:49.108 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:49.108 
17:36:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:49.108 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:49.108 "name": "raid_bdev1", 00:39:49.108 "uuid": "c044c880-21d8-4791-af2a-82c043040134", 00:39:49.108 "strip_size_kb": 0, 00:39:49.108 "state": "online", 00:39:49.108 "raid_level": "raid1", 00:39:49.108 "superblock": true, 00:39:49.108 "num_base_bdevs": 4, 00:39:49.108 "num_base_bdevs_discovered": 3, 00:39:49.108 "num_base_bdevs_operational": 3, 00:39:49.108 "base_bdevs_list": [ 00:39:49.108 { 00:39:49.108 "name": null, 00:39:49.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:49.108 "is_configured": false, 00:39:49.108 "data_offset": 0, 00:39:49.108 "data_size": 63488 00:39:49.108 }, 00:39:49.108 { 00:39:49.108 "name": "BaseBdev2", 00:39:49.108 "uuid": "6475cbc7-401e-5cd0-a1f3-3e010bb49c6d", 00:39:49.108 "is_configured": true, 00:39:49.108 "data_offset": 2048, 00:39:49.108 "data_size": 63488 00:39:49.108 }, 00:39:49.108 { 00:39:49.108 "name": "BaseBdev3", 00:39:49.108 "uuid": "1620a8b3-cd8d-5410-8e51-679092f65397", 00:39:49.108 "is_configured": true, 00:39:49.108 "data_offset": 2048, 00:39:49.108 "data_size": 63488 00:39:49.108 }, 00:39:49.108 { 00:39:49.108 "name": "BaseBdev4", 00:39:49.108 "uuid": "d37e0088-ef7d-59c9-bcbc-42db20a8ea3a", 00:39:49.108 "is_configured": true, 00:39:49.108 "data_offset": 2048, 00:39:49.108 "data_size": 63488 00:39:49.108 } 00:39:49.108 ] 00:39:49.108 }' 00:39:49.108 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:49.108 17:36:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:49.367 [2024-11-26 17:36:49.802216] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:39:49.367 I/O size of 3145728 is greater than zero copy threshold (65536). 00:39:49.367 Zero copy mechanism will not be used. 
00:39:49.367 Running I/O for 60 seconds... 00:39:49.626 17:36:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:39:49.626 17:36:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:49.626 17:36:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:49.626 [2024-11-26 17:36:50.124361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:49.626 17:36:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:49.626 17:36:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:39:49.626 [2024-11-26 17:36:50.184953] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:39:49.626 [2024-11-26 17:36:50.187077] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:39:49.884 [2024-11-26 17:36:50.320817] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:39:49.884 [2024-11-26 17:36:50.322318] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:39:49.884 [2024-11-26 17:36:50.550231] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:39:49.884 [2024-11-26 17:36:50.550689] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:39:50.402 131.00 IOPS, 393.00 MiB/s [2024-11-26T17:36:51.097Z] [2024-11-26 17:36:50.866886] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:39:50.402 [2024-11-26 17:36:50.868375] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:39:50.402 
[2024-11-26 17:36:51.095806] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:39:50.662 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:50.662 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:50.662 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:50.662 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:50.662 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:50.662 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:50.662 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:50.662 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:50.662 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:50.662 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:50.662 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:50.662 "name": "raid_bdev1", 00:39:50.662 "uuid": "c044c880-21d8-4791-af2a-82c043040134", 00:39:50.662 "strip_size_kb": 0, 00:39:50.662 "state": "online", 00:39:50.662 "raid_level": "raid1", 00:39:50.662 "superblock": true, 00:39:50.662 "num_base_bdevs": 4, 00:39:50.662 "num_base_bdevs_discovered": 4, 00:39:50.662 "num_base_bdevs_operational": 4, 00:39:50.662 "process": { 00:39:50.662 "type": "rebuild", 00:39:50.662 "target": "spare", 00:39:50.662 "progress": { 00:39:50.662 "blocks": 10240, 00:39:50.662 "percent": 16 00:39:50.662 } 00:39:50.662 }, 00:39:50.662 "base_bdevs_list": [ 
00:39:50.662 { 00:39:50.662 "name": "spare", 00:39:50.662 "uuid": "5a9f615d-a6cd-503c-b9f5-4f77ac854b94", 00:39:50.662 "is_configured": true, 00:39:50.662 "data_offset": 2048, 00:39:50.662 "data_size": 63488 00:39:50.662 }, 00:39:50.662 { 00:39:50.662 "name": "BaseBdev2", 00:39:50.662 "uuid": "6475cbc7-401e-5cd0-a1f3-3e010bb49c6d", 00:39:50.662 "is_configured": true, 00:39:50.662 "data_offset": 2048, 00:39:50.662 "data_size": 63488 00:39:50.662 }, 00:39:50.662 { 00:39:50.662 "name": "BaseBdev3", 00:39:50.662 "uuid": "1620a8b3-cd8d-5410-8e51-679092f65397", 00:39:50.662 "is_configured": true, 00:39:50.662 "data_offset": 2048, 00:39:50.662 "data_size": 63488 00:39:50.662 }, 00:39:50.662 { 00:39:50.662 "name": "BaseBdev4", 00:39:50.662 "uuid": "d37e0088-ef7d-59c9-bcbc-42db20a8ea3a", 00:39:50.662 "is_configured": true, 00:39:50.663 "data_offset": 2048, 00:39:50.663 "data_size": 63488 00:39:50.663 } 00:39:50.663 ] 00:39:50.663 }' 00:39:50.663 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:50.663 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:50.663 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:50.663 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:39:50.663 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:39:50.663 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:50.663 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:50.663 [2024-11-26 17:36:51.301476] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:50.663 [2024-11-26 17:36:51.317304] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 
offset_end: 18432 00:39:50.663 [2024-11-26 17:36:51.317898] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:39:50.663 [2024-11-26 17:36:51.318970] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:39:50.663 [2024-11-26 17:36:51.329008] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:50.663 [2024-11-26 17:36:51.329061] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:50.663 [2024-11-26 17:36:51.329075] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:39:50.923 [2024-11-26 17:36:51.374350] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:39:50.923 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:50.923 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:39:50.923 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:50.923 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:50.923 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:50.923 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:50.923 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:50.923 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:50.923 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:50.923 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:50.923 
17:36:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:50.923 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:50.923 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:50.923 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:50.923 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:50.923 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:50.923 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:50.923 "name": "raid_bdev1", 00:39:50.923 "uuid": "c044c880-21d8-4791-af2a-82c043040134", 00:39:50.923 "strip_size_kb": 0, 00:39:50.923 "state": "online", 00:39:50.923 "raid_level": "raid1", 00:39:50.923 "superblock": true, 00:39:50.923 "num_base_bdevs": 4, 00:39:50.923 "num_base_bdevs_discovered": 3, 00:39:50.923 "num_base_bdevs_operational": 3, 00:39:50.923 "base_bdevs_list": [ 00:39:50.923 { 00:39:50.923 "name": null, 00:39:50.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:50.923 "is_configured": false, 00:39:50.923 "data_offset": 0, 00:39:50.923 "data_size": 63488 00:39:50.923 }, 00:39:50.923 { 00:39:50.923 "name": "BaseBdev2", 00:39:50.923 "uuid": "6475cbc7-401e-5cd0-a1f3-3e010bb49c6d", 00:39:50.923 "is_configured": true, 00:39:50.923 "data_offset": 2048, 00:39:50.923 "data_size": 63488 00:39:50.923 }, 00:39:50.923 { 00:39:50.923 "name": "BaseBdev3", 00:39:50.923 "uuid": "1620a8b3-cd8d-5410-8e51-679092f65397", 00:39:50.923 "is_configured": true, 00:39:50.923 "data_offset": 2048, 00:39:50.923 "data_size": 63488 00:39:50.923 }, 00:39:50.923 { 00:39:50.923 "name": "BaseBdev4", 00:39:50.923 "uuid": "d37e0088-ef7d-59c9-bcbc-42db20a8ea3a", 00:39:50.923 "is_configured": true, 00:39:50.923 "data_offset": 2048, 
00:39:50.923 "data_size": 63488 00:39:50.923 } 00:39:50.923 ] 00:39:50.923 }' 00:39:50.923 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:50.923 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:51.183 131.00 IOPS, 393.00 MiB/s [2024-11-26T17:36:51.878Z] 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:51.183 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:51.183 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:39:51.183 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:39:51.183 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:51.183 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:51.183 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:51.183 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:51.183 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:51.442 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:51.442 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:51.442 "name": "raid_bdev1", 00:39:51.442 "uuid": "c044c880-21d8-4791-af2a-82c043040134", 00:39:51.442 "strip_size_kb": 0, 00:39:51.442 "state": "online", 00:39:51.442 "raid_level": "raid1", 00:39:51.442 "superblock": true, 00:39:51.442 "num_base_bdevs": 4, 00:39:51.442 "num_base_bdevs_discovered": 3, 00:39:51.442 "num_base_bdevs_operational": 3, 00:39:51.442 "base_bdevs_list": [ 00:39:51.442 { 00:39:51.442 "name": 
null, 00:39:51.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:51.442 "is_configured": false, 00:39:51.442 "data_offset": 0, 00:39:51.442 "data_size": 63488 00:39:51.442 }, 00:39:51.442 { 00:39:51.442 "name": "BaseBdev2", 00:39:51.442 "uuid": "6475cbc7-401e-5cd0-a1f3-3e010bb49c6d", 00:39:51.442 "is_configured": true, 00:39:51.442 "data_offset": 2048, 00:39:51.442 "data_size": 63488 00:39:51.442 }, 00:39:51.442 { 00:39:51.442 "name": "BaseBdev3", 00:39:51.442 "uuid": "1620a8b3-cd8d-5410-8e51-679092f65397", 00:39:51.442 "is_configured": true, 00:39:51.442 "data_offset": 2048, 00:39:51.442 "data_size": 63488 00:39:51.442 }, 00:39:51.442 { 00:39:51.442 "name": "BaseBdev4", 00:39:51.442 "uuid": "d37e0088-ef7d-59c9-bcbc-42db20a8ea3a", 00:39:51.442 "is_configured": true, 00:39:51.442 "data_offset": 2048, 00:39:51.442 "data_size": 63488 00:39:51.442 } 00:39:51.442 ] 00:39:51.442 }' 00:39:51.442 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:51.442 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:39:51.442 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:51.442 17:36:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:39:51.442 17:36:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:39:51.442 17:36:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:51.442 17:36:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:51.442 [2024-11-26 17:36:52.010713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:51.442 17:36:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:51.442 17:36:52 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@663 -- # sleep 1 00:39:51.442 [2024-11-26 17:36:52.074104] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:39:51.442 [2024-11-26 17:36:52.076081] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:39:51.701 [2024-11-26 17:36:52.191256] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:39:51.701 [2024-11-26 17:36:52.192821] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:39:51.960 [2024-11-26 17:36:52.415561] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:39:51.960 [2024-11-26 17:36:52.416463] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:39:52.219 [2024-11-26 17:36:52.739451] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:39:52.219 128.33 IOPS, 385.00 MiB/s [2024-11-26T17:36:52.914Z] [2024-11-26 17:36:52.869125] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:39:52.219 [2024-11-26 17:36:52.870061] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:39:52.478 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:52.478 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:52.478 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:52.478 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:52.478 17:36:53 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:52.478 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:52.478 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:52.478 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:52.478 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:52.478 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:52.478 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:52.478 "name": "raid_bdev1", 00:39:52.478 "uuid": "c044c880-21d8-4791-af2a-82c043040134", 00:39:52.478 "strip_size_kb": 0, 00:39:52.478 "state": "online", 00:39:52.478 "raid_level": "raid1", 00:39:52.478 "superblock": true, 00:39:52.478 "num_base_bdevs": 4, 00:39:52.478 "num_base_bdevs_discovered": 4, 00:39:52.478 "num_base_bdevs_operational": 4, 00:39:52.478 "process": { 00:39:52.478 "type": "rebuild", 00:39:52.478 "target": "spare", 00:39:52.478 "progress": { 00:39:52.478 "blocks": 10240, 00:39:52.478 "percent": 16 00:39:52.478 } 00:39:52.478 }, 00:39:52.478 "base_bdevs_list": [ 00:39:52.478 { 00:39:52.478 "name": "spare", 00:39:52.478 "uuid": "5a9f615d-a6cd-503c-b9f5-4f77ac854b94", 00:39:52.478 "is_configured": true, 00:39:52.478 "data_offset": 2048, 00:39:52.478 "data_size": 63488 00:39:52.478 }, 00:39:52.478 { 00:39:52.478 "name": "BaseBdev2", 00:39:52.479 "uuid": "6475cbc7-401e-5cd0-a1f3-3e010bb49c6d", 00:39:52.479 "is_configured": true, 00:39:52.479 "data_offset": 2048, 00:39:52.479 "data_size": 63488 00:39:52.479 }, 00:39:52.479 { 00:39:52.479 "name": "BaseBdev3", 00:39:52.479 "uuid": "1620a8b3-cd8d-5410-8e51-679092f65397", 00:39:52.479 "is_configured": true, 00:39:52.479 "data_offset": 2048, 00:39:52.479 
"data_size": 63488 00:39:52.479 }, 00:39:52.479 { 00:39:52.479 "name": "BaseBdev4", 00:39:52.479 "uuid": "d37e0088-ef7d-59c9-bcbc-42db20a8ea3a", 00:39:52.479 "is_configured": true, 00:39:52.479 "data_offset": 2048, 00:39:52.479 "data_size": 63488 00:39:52.479 } 00:39:52.479 ] 00:39:52.479 }' 00:39:52.479 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:52.479 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:52.479 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:52.737 [2024-11-26 17:36:53.201271] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:39:52.737 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:39:52.737 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:39:52.737 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:39:52.737 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:39:52.737 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:39:52.737 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:39:52.738 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:39:52.738 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:39:52.738 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:52.738 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:52.738 [2024-11-26 17:36:53.220254] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: BaseBdev2 00:39:52.738 [2024-11-26 17:36:53.321177] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:39:52.996 [2024-11-26 17:36:53.450832] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:39:52.996 [2024-11-26 17:36:53.450945] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:39:52.996 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:52.996 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:39:52.996 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:39:52.996 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:52.996 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:52.996 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:52.996 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:52.996 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:52.996 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:52.996 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:52.997 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:52.997 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:52.997 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:52.997 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:52.997 "name": "raid_bdev1", 00:39:52.997 "uuid": "c044c880-21d8-4791-af2a-82c043040134", 00:39:52.997 "strip_size_kb": 0, 00:39:52.997 "state": "online", 00:39:52.997 "raid_level": "raid1", 00:39:52.997 "superblock": true, 00:39:52.997 "num_base_bdevs": 4, 00:39:52.997 "num_base_bdevs_discovered": 3, 00:39:52.997 "num_base_bdevs_operational": 3, 00:39:52.997 "process": { 00:39:52.997 "type": "rebuild", 00:39:52.997 "target": "spare", 00:39:52.997 "progress": { 00:39:52.997 "blocks": 16384, 00:39:52.997 "percent": 25 00:39:52.997 } 00:39:52.997 }, 00:39:52.997 "base_bdevs_list": [ 00:39:52.997 { 00:39:52.997 "name": "spare", 00:39:52.997 "uuid": "5a9f615d-a6cd-503c-b9f5-4f77ac854b94", 00:39:52.997 "is_configured": true, 00:39:52.997 "data_offset": 2048, 00:39:52.997 "data_size": 63488 00:39:52.997 }, 00:39:52.997 { 00:39:52.997 "name": null, 00:39:52.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:52.997 "is_configured": false, 00:39:52.997 "data_offset": 0, 00:39:52.997 "data_size": 63488 00:39:52.997 }, 00:39:52.997 { 00:39:52.997 "name": "BaseBdev3", 00:39:52.997 "uuid": "1620a8b3-cd8d-5410-8e51-679092f65397", 00:39:52.997 "is_configured": true, 00:39:52.997 "data_offset": 2048, 00:39:52.997 "data_size": 63488 00:39:52.997 }, 00:39:52.997 { 00:39:52.997 "name": "BaseBdev4", 00:39:52.997 "uuid": "d37e0088-ef7d-59c9-bcbc-42db20a8ea3a", 00:39:52.997 "is_configured": true, 00:39:52.997 "data_offset": 2048, 00:39:52.997 "data_size": 63488 00:39:52.997 } 00:39:52.997 ] 00:39:52.997 }' 00:39:52.997 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:52.997 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:52.997 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:52.997 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:39:52.997 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=508 00:39:52.997 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:39:52.997 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:52.997 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:52.997 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:52.997 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:52.997 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:52.997 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:52.997 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:52.997 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:52.997 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:52.997 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:52.997 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:52.997 "name": "raid_bdev1", 00:39:52.997 "uuid": "c044c880-21d8-4791-af2a-82c043040134", 00:39:52.997 "strip_size_kb": 0, 00:39:52.997 "state": "online", 00:39:52.997 "raid_level": "raid1", 00:39:52.997 "superblock": true, 00:39:52.997 "num_base_bdevs": 4, 00:39:52.997 "num_base_bdevs_discovered": 3, 00:39:52.997 "num_base_bdevs_operational": 3, 00:39:52.997 "process": { 00:39:52.997 "type": "rebuild", 00:39:52.997 "target": "spare", 00:39:52.997 "progress": { 00:39:52.997 
"blocks": 18432, 00:39:52.997 "percent": 29 00:39:52.997 } 00:39:52.997 }, 00:39:52.997 "base_bdevs_list": [ 00:39:52.997 { 00:39:52.997 "name": "spare", 00:39:52.997 "uuid": "5a9f615d-a6cd-503c-b9f5-4f77ac854b94", 00:39:52.997 "is_configured": true, 00:39:52.997 "data_offset": 2048, 00:39:52.997 "data_size": 63488 00:39:52.997 }, 00:39:52.997 { 00:39:52.997 "name": null, 00:39:52.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:52.997 "is_configured": false, 00:39:52.997 "data_offset": 0, 00:39:52.997 "data_size": 63488 00:39:52.997 }, 00:39:52.997 { 00:39:52.997 "name": "BaseBdev3", 00:39:52.997 "uuid": "1620a8b3-cd8d-5410-8e51-679092f65397", 00:39:52.997 "is_configured": true, 00:39:52.997 "data_offset": 2048, 00:39:52.997 "data_size": 63488 00:39:52.997 }, 00:39:52.997 { 00:39:52.997 "name": "BaseBdev4", 00:39:52.997 "uuid": "d37e0088-ef7d-59c9-bcbc-42db20a8ea3a", 00:39:52.997 "is_configured": true, 00:39:52.997 "data_offset": 2048, 00:39:52.997 "data_size": 63488 00:39:52.997 } 00:39:52.997 ] 00:39:52.997 }' 00:39:52.997 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:53.256 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:53.256 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:53.256 [2024-11-26 17:36:53.719758] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:39:53.256 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:39:53.256 17:36:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:39:53.256 107.50 IOPS, 322.50 MiB/s [2024-11-26T17:36:53.951Z] [2024-11-26 17:36:53.839037] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:39:53.823 
[2024-11-26 17:36:54.275232] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:39:54.082 [2024-11-26 17:36:54.639974] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:39:54.082 17:36:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:39:54.082 17:36:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:54.082 17:36:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:54.082 17:36:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:54.082 17:36:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:54.082 17:36:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:54.082 17:36:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:54.082 17:36:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:54.082 17:36:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:54.082 17:36:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:54.342 17:36:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:54.342 99.40 IOPS, 298.20 MiB/s [2024-11-26T17:36:55.037Z] 17:36:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:54.342 "name": "raid_bdev1", 00:39:54.342 "uuid": "c044c880-21d8-4791-af2a-82c043040134", 00:39:54.342 "strip_size_kb": 0, 00:39:54.342 "state": "online", 00:39:54.342 "raid_level": "raid1", 00:39:54.342 "superblock": true, 00:39:54.342 "num_base_bdevs": 4, 
00:39:54.342 "num_base_bdevs_discovered": 3, 00:39:54.342 "num_base_bdevs_operational": 3, 00:39:54.342 "process": { 00:39:54.342 "type": "rebuild", 00:39:54.342 "target": "spare", 00:39:54.342 "progress": { 00:39:54.342 "blocks": 34816, 00:39:54.342 "percent": 54 00:39:54.342 } 00:39:54.342 }, 00:39:54.342 "base_bdevs_list": [ 00:39:54.342 { 00:39:54.342 "name": "spare", 00:39:54.342 "uuid": "5a9f615d-a6cd-503c-b9f5-4f77ac854b94", 00:39:54.342 "is_configured": true, 00:39:54.342 "data_offset": 2048, 00:39:54.342 "data_size": 63488 00:39:54.342 }, 00:39:54.342 { 00:39:54.342 "name": null, 00:39:54.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:54.342 "is_configured": false, 00:39:54.342 "data_offset": 0, 00:39:54.342 "data_size": 63488 00:39:54.342 }, 00:39:54.342 { 00:39:54.342 "name": "BaseBdev3", 00:39:54.342 "uuid": "1620a8b3-cd8d-5410-8e51-679092f65397", 00:39:54.342 "is_configured": true, 00:39:54.342 "data_offset": 2048, 00:39:54.342 "data_size": 63488 00:39:54.342 }, 00:39:54.342 { 00:39:54.342 "name": "BaseBdev4", 00:39:54.342 "uuid": "d37e0088-ef7d-59c9-bcbc-42db20a8ea3a", 00:39:54.342 "is_configured": true, 00:39:54.342 "data_offset": 2048, 00:39:54.342 "data_size": 63488 00:39:54.342 } 00:39:54.342 ] 00:39:54.342 }' 00:39:54.342 17:36:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:54.342 17:36:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:54.342 17:36:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:54.342 17:36:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:39:54.342 17:36:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:39:54.342 [2024-11-26 17:36:54.983147] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:39:54.627 
[2024-11-26 17:36:55.099315] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:39:55.209 [2024-11-26 17:36:55.661941] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:39:55.209 [2024-11-26 17:36:55.662586] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:39:55.209 90.33 IOPS, 271.00 MiB/s [2024-11-26T17:36:55.904Z] [2024-11-26 17:36:55.864966] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:39:55.469 17:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:39:55.469 17:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:55.469 17:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:55.469 17:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:55.469 17:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:55.469 17:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:55.469 17:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:55.469 17:36:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.469 17:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:55.469 17:36:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:55.469 17:36:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.469 17:36:55 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:55.469 "name": "raid_bdev1", 00:39:55.470 "uuid": "c044c880-21d8-4791-af2a-82c043040134", 00:39:55.470 "strip_size_kb": 0, 00:39:55.470 "state": "online", 00:39:55.470 "raid_level": "raid1", 00:39:55.470 "superblock": true, 00:39:55.470 "num_base_bdevs": 4, 00:39:55.470 "num_base_bdevs_discovered": 3, 00:39:55.470 "num_base_bdevs_operational": 3, 00:39:55.470 "process": { 00:39:55.470 "type": "rebuild", 00:39:55.470 "target": "spare", 00:39:55.470 "progress": { 00:39:55.470 "blocks": 53248, 00:39:55.470 "percent": 83 00:39:55.470 } 00:39:55.470 }, 00:39:55.470 "base_bdevs_list": [ 00:39:55.470 { 00:39:55.470 "name": "spare", 00:39:55.470 "uuid": "5a9f615d-a6cd-503c-b9f5-4f77ac854b94", 00:39:55.470 "is_configured": true, 00:39:55.470 "data_offset": 2048, 00:39:55.470 "data_size": 63488 00:39:55.470 }, 00:39:55.470 { 00:39:55.470 "name": null, 00:39:55.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:55.470 "is_configured": false, 00:39:55.470 "data_offset": 0, 00:39:55.470 "data_size": 63488 00:39:55.470 }, 00:39:55.470 { 00:39:55.470 "name": "BaseBdev3", 00:39:55.470 "uuid": "1620a8b3-cd8d-5410-8e51-679092f65397", 00:39:55.470 "is_configured": true, 00:39:55.470 "data_offset": 2048, 00:39:55.470 "data_size": 63488 00:39:55.470 }, 00:39:55.470 { 00:39:55.470 "name": "BaseBdev4", 00:39:55.470 "uuid": "d37e0088-ef7d-59c9-bcbc-42db20a8ea3a", 00:39:55.470 "is_configured": true, 00:39:55.470 "data_offset": 2048, 00:39:55.470 "data_size": 63488 00:39:55.470 } 00:39:55.470 ] 00:39:55.470 }' 00:39:55.470 17:36:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:55.470 17:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:55.470 17:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:55.470 17:36:56 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:39:55.470 17:36:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:39:55.470 [2024-11-26 17:36:56.083881] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:39:55.729 [2024-11-26 17:36:56.317085] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:39:55.988 [2024-11-26 17:36:56.641670] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:39:56.248 [2024-11-26 17:36:56.741474] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:39:56.248 [2024-11-26 17:36:56.744760] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:56.508 81.71 IOPS, 245.14 MiB/s [2024-11-26T17:36:57.203Z] 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:39:56.508 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:56.508 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:56.508 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:56.508 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:56.508 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:56.508 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:56.508 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:56.508 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:39:56.508 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:56.508 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:56.508 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:56.508 "name": "raid_bdev1", 00:39:56.508 "uuid": "c044c880-21d8-4791-af2a-82c043040134", 00:39:56.508 "strip_size_kb": 0, 00:39:56.508 "state": "online", 00:39:56.508 "raid_level": "raid1", 00:39:56.508 "superblock": true, 00:39:56.508 "num_base_bdevs": 4, 00:39:56.508 "num_base_bdevs_discovered": 3, 00:39:56.508 "num_base_bdevs_operational": 3, 00:39:56.508 "base_bdevs_list": [ 00:39:56.508 { 00:39:56.508 "name": "spare", 00:39:56.508 "uuid": "5a9f615d-a6cd-503c-b9f5-4f77ac854b94", 00:39:56.508 "is_configured": true, 00:39:56.508 "data_offset": 2048, 00:39:56.508 "data_size": 63488 00:39:56.508 }, 00:39:56.508 { 00:39:56.508 "name": null, 00:39:56.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:56.508 "is_configured": false, 00:39:56.508 "data_offset": 0, 00:39:56.508 "data_size": 63488 00:39:56.508 }, 00:39:56.508 { 00:39:56.508 "name": "BaseBdev3", 00:39:56.508 "uuid": "1620a8b3-cd8d-5410-8e51-679092f65397", 00:39:56.508 "is_configured": true, 00:39:56.508 "data_offset": 2048, 00:39:56.508 "data_size": 63488 00:39:56.508 }, 00:39:56.508 { 00:39:56.508 "name": "BaseBdev4", 00:39:56.508 "uuid": "d37e0088-ef7d-59c9-bcbc-42db20a8ea3a", 00:39:56.508 "is_configured": true, 00:39:56.508 "data_offset": 2048, 00:39:56.508 "data_size": 63488 00:39:56.508 } 00:39:56.508 ] 00:39:56.508 }' 00:39:56.508 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:56.508 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:39:56.508 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:56.768 
17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:39:56.768 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:39:56.768 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:56.768 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:56.768 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:39:56.768 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:39:56.768 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:56.768 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:56.768 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:56.768 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:56.768 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:56.768 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:56.768 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:56.768 "name": "raid_bdev1", 00:39:56.768 "uuid": "c044c880-21d8-4791-af2a-82c043040134", 00:39:56.768 "strip_size_kb": 0, 00:39:56.768 "state": "online", 00:39:56.768 "raid_level": "raid1", 00:39:56.768 "superblock": true, 00:39:56.768 "num_base_bdevs": 4, 00:39:56.768 "num_base_bdevs_discovered": 3, 00:39:56.768 "num_base_bdevs_operational": 3, 00:39:56.768 "base_bdevs_list": [ 00:39:56.768 { 00:39:56.768 "name": "spare", 00:39:56.768 "uuid": "5a9f615d-a6cd-503c-b9f5-4f77ac854b94", 00:39:56.768 "is_configured": true, 00:39:56.768 "data_offset": 2048, 
00:39:56.768 "data_size": 63488 00:39:56.768 }, 00:39:56.768 { 00:39:56.768 "name": null, 00:39:56.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:56.768 "is_configured": false, 00:39:56.768 "data_offset": 0, 00:39:56.768 "data_size": 63488 00:39:56.768 }, 00:39:56.768 { 00:39:56.768 "name": "BaseBdev3", 00:39:56.768 "uuid": "1620a8b3-cd8d-5410-8e51-679092f65397", 00:39:56.768 "is_configured": true, 00:39:56.768 "data_offset": 2048, 00:39:56.768 "data_size": 63488 00:39:56.768 }, 00:39:56.768 { 00:39:56.768 "name": "BaseBdev4", 00:39:56.768 "uuid": "d37e0088-ef7d-59c9-bcbc-42db20a8ea3a", 00:39:56.768 "is_configured": true, 00:39:56.768 "data_offset": 2048, 00:39:56.768 "data_size": 63488 00:39:56.768 } 00:39:56.768 ] 00:39:56.768 }' 00:39:56.768 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:56.768 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:39:56.768 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:56.768 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:39:56.768 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:39:56.768 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:56.768 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:56.768 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:56.768 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:56.768 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:56.768 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:39:56.768 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:56.768 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:56.768 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:56.768 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:56.769 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:56.769 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:56.769 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:56.769 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:56.769 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:56.769 "name": "raid_bdev1", 00:39:56.769 "uuid": "c044c880-21d8-4791-af2a-82c043040134", 00:39:56.769 "strip_size_kb": 0, 00:39:56.769 "state": "online", 00:39:56.769 "raid_level": "raid1", 00:39:56.769 "superblock": true, 00:39:56.769 "num_base_bdevs": 4, 00:39:56.769 "num_base_bdevs_discovered": 3, 00:39:56.769 "num_base_bdevs_operational": 3, 00:39:56.769 "base_bdevs_list": [ 00:39:56.769 { 00:39:56.769 "name": "spare", 00:39:56.769 "uuid": "5a9f615d-a6cd-503c-b9f5-4f77ac854b94", 00:39:56.769 "is_configured": true, 00:39:56.769 "data_offset": 2048, 00:39:56.769 "data_size": 63488 00:39:56.769 }, 00:39:56.769 { 00:39:56.769 "name": null, 00:39:56.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:56.769 "is_configured": false, 00:39:56.769 "data_offset": 0, 00:39:56.769 "data_size": 63488 00:39:56.769 }, 00:39:56.769 { 00:39:56.769 "name": "BaseBdev3", 00:39:56.769 "uuid": "1620a8b3-cd8d-5410-8e51-679092f65397", 00:39:56.769 "is_configured": true, 
00:39:56.769 "data_offset": 2048, 00:39:56.769 "data_size": 63488 00:39:56.769 }, 00:39:56.769 { 00:39:56.769 "name": "BaseBdev4", 00:39:56.769 "uuid": "d37e0088-ef7d-59c9-bcbc-42db20a8ea3a", 00:39:56.769 "is_configured": true, 00:39:56.769 "data_offset": 2048, 00:39:56.769 "data_size": 63488 00:39:56.769 } 00:39:56.769 ] 00:39:56.769 }' 00:39:56.769 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:56.769 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:57.339 77.50 IOPS, 232.50 MiB/s [2024-11-26T17:36:58.034Z] 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:39:57.339 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.339 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:57.339 [2024-11-26 17:36:57.823055] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:57.339 [2024-11-26 17:36:57.823134] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:57.339 00:39:57.339 Latency(us) 00:39:57.339 [2024-11-26T17:36:58.034Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:57.339 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:39:57.339 raid_bdev1 : 8.10 76.77 230.32 0.00 0.00 18341.48 343.42 130041.74 00:39:57.339 [2024-11-26T17:36:58.034Z] =================================================================================================================== 00:39:57.339 [2024-11-26T17:36:58.034Z] Total : 76.77 230.32 0.00 0.00 18341.48 343.42 130041.74 00:39:57.339 [2024-11-26 17:36:57.915300] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:57.339 [2024-11-26 17:36:57.915446] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:57.339 
[2024-11-26 17:36:57.915596] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:57.339 [2024-11-26 17:36:57.915648] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:39:57.339 { 00:39:57.339 "results": [ 00:39:57.339 { 00:39:57.339 "job": "raid_bdev1", 00:39:57.339 "core_mask": "0x1", 00:39:57.339 "workload": "randrw", 00:39:57.339 "percentage": 50, 00:39:57.339 "status": "finished", 00:39:57.339 "queue_depth": 2, 00:39:57.339 "io_size": 3145728, 00:39:57.339 "runtime": 8.101738, 00:39:57.339 "iops": 76.77365029577604, 00:39:57.339 "mibps": 230.3209508873281, 00:39:57.339 "io_failed": 0, 00:39:57.339 "io_timeout": 0, 00:39:57.339 "avg_latency_us": 18341.48357039554, 00:39:57.339 "min_latency_us": 343.42008733624453, 00:39:57.339 "max_latency_us": 130041.73973799127 00:39:57.339 } 00:39:57.339 ], 00:39:57.339 "core_count": 1 00:39:57.339 } 00:39:57.339 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:57.339 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:57.339 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:39:57.339 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.339 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:57.339 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:57.339 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:39:57.339 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:39:57.339 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:39:57.339 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:39:57.339 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:39:57.339 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:39:57.340 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:39:57.340 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:39:57.340 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:39:57.340 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:39:57.340 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:39:57.340 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:57.340 17:36:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:39:57.600 /dev/nbd0 00:39:57.600 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:39:57.600 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:39:57.600 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:39:57.600 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:39:57.600 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:57.600 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:57.600 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:39:57.600 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:39:57.600 17:36:58 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:57.600 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:57.600 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:57.600 1+0 records in 00:39:57.600 1+0 records out 00:39:57.600 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000486319 s, 8.4 MB/s 00:39:57.600 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:57.600 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:39:57.600 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:57.600 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:57.600 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:39:57.600 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:57.600 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:57.600 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:39:57.600 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:39:57.600 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:39:57.600 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:39:57.600 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:39:57.600 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 
/dev/nbd1 00:39:57.600 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:39:57.600 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:39:57.600 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:39:57.600 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:39:57.600 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:39:57.600 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:39:57.600 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:39:57.600 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:57.600 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:39:57.859 /dev/nbd1 00:39:57.859 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:39:57.859 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:39:57.859 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:39:57.859 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:39:57.859 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:57.859 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:57.859 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:39:57.859 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:39:57.859 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:57.859 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:57.859 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:57.859 1+0 records in 00:39:57.859 1+0 records out 00:39:57.859 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440779 s, 9.3 MB/s 00:39:57.859 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:57.859 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:39:57.859 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:57.859 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:57.859 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:39:57.859 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:57.859 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:57.859 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:39:58.118 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:39:58.118 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:39:58.118 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:39:58.118 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:58.118 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:39:58.118 17:36:58 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:58.118 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:39:58.377 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:39:58.377 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:39:58.377 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:39:58.377 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:58.377 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:58.377 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:39:58.377 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:39:58.378 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:39:58.378 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:39:58.378 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:39:58.378 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:39:58.378 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:39:58.378 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:39:58.378 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:39:58.378 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:39:58.378 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 
00:39:58.378 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:39:58.378 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:39:58.378 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:58.378 17:36:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:39:58.637 /dev/nbd1 00:39:58.637 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:39:58.637 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:39:58.637 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:39:58.637 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:39:58.637 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:39:58.637 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:39:58.637 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:39:58.637 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:39:58.637 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:39:58.637 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:39:58.637 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:58.637 1+0 records in 00:39:58.637 1+0 records out 00:39:58.637 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000516012 s, 7.9 MB/s 00:39:58.637 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c 
%s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:58.637 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:39:58.637 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:58.637 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:39:58.637 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:39:58.637 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:58.637 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:58.637 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:39:58.637 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:39:58.637 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:39:58.637 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:39:58.637 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:58.637 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:39:58.637 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:58.637 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:39:58.896 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:39:58.896 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:39:58.896 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:39:58.896 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:58.896 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:58.896 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:39:58.896 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:39:58.896 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:39:58.896 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:39:58.896 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:39:58.896 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:39:58.896 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:58.896 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:39:58.896 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:58.896 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:39:59.155 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:39:59.155 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:39:59.155 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:39:59.155 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:59.155 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:59.155 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:39:59.155 
17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:39:59.155 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:39:59.155 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:39:59.155 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:39:59.155 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:59.155 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:59.155 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:59.155 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:39:59.155 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:59.155 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:59.155 [2024-11-26 17:36:59.786134] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:39:59.155 [2024-11-26 17:36:59.786243] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:59.155 [2024-11-26 17:36:59.786285] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:39:59.155 [2024-11-26 17:36:59.786314] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:59.155 [2024-11-26 17:36:59.788664] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:59.155 [2024-11-26 17:36:59.788741] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:39:59.155 [2024-11-26 17:36:59.788861] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:39:59.155 [2024-11-26 17:36:59.788936] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:59.155 [2024-11-26 17:36:59.789119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:39:59.155 [2024-11-26 17:36:59.789254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:39:59.155 spare 00:39:59.155 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:59.155 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:39:59.155 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:59.155 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:59.414 [2024-11-26 17:36:59.889205] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:39:59.414 [2024-11-26 17:36:59.889260] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:39:59.414 [2024-11-26 17:36:59.889670] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:39:59.414 [2024-11-26 17:36:59.889908] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:39:59.414 [2024-11-26 17:36:59.889922] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:39:59.414 [2024-11-26 17:36:59.890140] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:59.414 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:59.414 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:39:59.414 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:59.414 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:39:59.414 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:59.414 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:59.414 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:39:59.414 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:59.414 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:59.414 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:59.414 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:59.414 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:59.414 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:59.414 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:59.414 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:59.414 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:59.414 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:59.414 "name": "raid_bdev1", 00:39:59.414 "uuid": "c044c880-21d8-4791-af2a-82c043040134", 00:39:59.414 "strip_size_kb": 0, 00:39:59.414 "state": "online", 00:39:59.414 "raid_level": "raid1", 00:39:59.414 "superblock": true, 00:39:59.414 "num_base_bdevs": 4, 00:39:59.414 "num_base_bdevs_discovered": 3, 00:39:59.414 "num_base_bdevs_operational": 3, 00:39:59.414 "base_bdevs_list": [ 00:39:59.414 { 00:39:59.414 "name": "spare", 00:39:59.414 "uuid": "5a9f615d-a6cd-503c-b9f5-4f77ac854b94", 00:39:59.414 "is_configured": true, 
00:39:59.414 "data_offset": 2048, 00:39:59.414 "data_size": 63488 00:39:59.414 }, 00:39:59.414 { 00:39:59.414 "name": null, 00:39:59.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:59.414 "is_configured": false, 00:39:59.414 "data_offset": 2048, 00:39:59.414 "data_size": 63488 00:39:59.414 }, 00:39:59.414 { 00:39:59.414 "name": "BaseBdev3", 00:39:59.414 "uuid": "1620a8b3-cd8d-5410-8e51-679092f65397", 00:39:59.414 "is_configured": true, 00:39:59.414 "data_offset": 2048, 00:39:59.414 "data_size": 63488 00:39:59.414 }, 00:39:59.414 { 00:39:59.414 "name": "BaseBdev4", 00:39:59.414 "uuid": "d37e0088-ef7d-59c9-bcbc-42db20a8ea3a", 00:39:59.414 "is_configured": true, 00:39:59.414 "data_offset": 2048, 00:39:59.414 "data_size": 63488 00:39:59.414 } 00:39:59.414 ] 00:39:59.414 }' 00:39:59.414 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:59.414 17:36:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:59.984 17:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:59.984 17:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:59.985 17:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:39:59.985 17:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:39:59.985 17:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:59.985 17:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:59.985 17:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:59.985 17:37:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:59.985 17:37:00 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:39:59.985 17:37:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:59.985 17:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:59.985 "name": "raid_bdev1", 00:39:59.985 "uuid": "c044c880-21d8-4791-af2a-82c043040134", 00:39:59.985 "strip_size_kb": 0, 00:39:59.985 "state": "online", 00:39:59.985 "raid_level": "raid1", 00:39:59.985 "superblock": true, 00:39:59.985 "num_base_bdevs": 4, 00:39:59.985 "num_base_bdevs_discovered": 3, 00:39:59.985 "num_base_bdevs_operational": 3, 00:39:59.985 "base_bdevs_list": [ 00:39:59.985 { 00:39:59.985 "name": "spare", 00:39:59.985 "uuid": "5a9f615d-a6cd-503c-b9f5-4f77ac854b94", 00:39:59.985 "is_configured": true, 00:39:59.985 "data_offset": 2048, 00:39:59.985 "data_size": 63488 00:39:59.985 }, 00:39:59.985 { 00:39:59.985 "name": null, 00:39:59.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:59.985 "is_configured": false, 00:39:59.985 "data_offset": 2048, 00:39:59.985 "data_size": 63488 00:39:59.985 }, 00:39:59.985 { 00:39:59.985 "name": "BaseBdev3", 00:39:59.985 "uuid": "1620a8b3-cd8d-5410-8e51-679092f65397", 00:39:59.985 "is_configured": true, 00:39:59.985 "data_offset": 2048, 00:39:59.985 "data_size": 63488 00:39:59.985 }, 00:39:59.985 { 00:39:59.985 "name": "BaseBdev4", 00:39:59.985 "uuid": "d37e0088-ef7d-59c9-bcbc-42db20a8ea3a", 00:39:59.985 "is_configured": true, 00:39:59.985 "data_offset": 2048, 00:39:59.985 "data_size": 63488 00:39:59.985 } 00:39:59.985 ] 00:39:59.985 }' 00:39:59.985 17:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:59.985 17:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:39:59.985 17:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:59.985 17:37:00 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:39:59.985 17:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:59.985 17:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:39:59.985 17:37:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:59.985 17:37:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:59.985 17:37:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:59.985 17:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:39:59.985 17:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:39:59.985 17:37:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:59.985 17:37:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:59.985 [2024-11-26 17:37:00.577074] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:59.985 17:37:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:59.985 17:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:39:59.985 17:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:59.985 17:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:59.985 17:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:59.985 17:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:59.985 17:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:39:59.985 17:37:00 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:59.985 17:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:59.985 17:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:59.985 17:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:59.985 17:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:59.985 17:37:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:59.985 17:37:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:39:59.985 17:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:59.985 17:37:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:59.985 17:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:59.985 "name": "raid_bdev1", 00:39:59.985 "uuid": "c044c880-21d8-4791-af2a-82c043040134", 00:39:59.985 "strip_size_kb": 0, 00:39:59.985 "state": "online", 00:39:59.985 "raid_level": "raid1", 00:39:59.985 "superblock": true, 00:39:59.985 "num_base_bdevs": 4, 00:39:59.985 "num_base_bdevs_discovered": 2, 00:39:59.985 "num_base_bdevs_operational": 2, 00:39:59.985 "base_bdevs_list": [ 00:39:59.985 { 00:39:59.985 "name": null, 00:39:59.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:59.985 "is_configured": false, 00:39:59.985 "data_offset": 0, 00:39:59.985 "data_size": 63488 00:39:59.985 }, 00:39:59.985 { 00:39:59.985 "name": null, 00:39:59.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:59.985 "is_configured": false, 00:39:59.985 "data_offset": 2048, 00:39:59.985 "data_size": 63488 00:39:59.985 }, 00:39:59.985 { 00:39:59.985 "name": "BaseBdev3", 00:39:59.985 "uuid": 
"1620a8b3-cd8d-5410-8e51-679092f65397", 00:39:59.985 "is_configured": true, 00:39:59.985 "data_offset": 2048, 00:39:59.985 "data_size": 63488 00:39:59.985 }, 00:39:59.985 { 00:39:59.985 "name": "BaseBdev4", 00:39:59.985 "uuid": "d37e0088-ef7d-59c9-bcbc-42db20a8ea3a", 00:39:59.985 "is_configured": true, 00:39:59.985 "data_offset": 2048, 00:39:59.985 "data_size": 63488 00:39:59.985 } 00:39:59.985 ] 00:39:59.985 }' 00:39:59.985 17:37:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:59.985 17:37:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:40:00.558 17:37:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:40:00.558 17:37:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:00.558 17:37:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:40:00.558 [2024-11-26 17:37:01.040447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:40:00.558 [2024-11-26 17:37:01.040761] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:40:00.558 [2024-11-26 17:37:01.040827] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:40:00.558 [2024-11-26 17:37:01.040930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:40:00.558 [2024-11-26 17:37:01.057818] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:40:00.558 17:37:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:00.558 17:37:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:40:00.558 [2024-11-26 17:37:01.059946] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:40:01.493 17:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:01.493 17:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:01.493 17:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:40:01.493 17:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:40:01.493 17:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:01.493 17:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:01.493 17:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:01.493 17:37:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:01.493 17:37:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:40:01.493 17:37:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:01.493 17:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:01.493 "name": "raid_bdev1", 00:40:01.493 "uuid": "c044c880-21d8-4791-af2a-82c043040134", 00:40:01.493 "strip_size_kb": 0, 00:40:01.493 "state": "online", 
00:40:01.493 "raid_level": "raid1", 00:40:01.493 "superblock": true, 00:40:01.493 "num_base_bdevs": 4, 00:40:01.493 "num_base_bdevs_discovered": 3, 00:40:01.493 "num_base_bdevs_operational": 3, 00:40:01.493 "process": { 00:40:01.493 "type": "rebuild", 00:40:01.493 "target": "spare", 00:40:01.493 "progress": { 00:40:01.493 "blocks": 20480, 00:40:01.493 "percent": 32 00:40:01.493 } 00:40:01.493 }, 00:40:01.493 "base_bdevs_list": [ 00:40:01.493 { 00:40:01.493 "name": "spare", 00:40:01.493 "uuid": "5a9f615d-a6cd-503c-b9f5-4f77ac854b94", 00:40:01.493 "is_configured": true, 00:40:01.493 "data_offset": 2048, 00:40:01.493 "data_size": 63488 00:40:01.493 }, 00:40:01.493 { 00:40:01.493 "name": null, 00:40:01.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:01.493 "is_configured": false, 00:40:01.493 "data_offset": 2048, 00:40:01.493 "data_size": 63488 00:40:01.493 }, 00:40:01.493 { 00:40:01.493 "name": "BaseBdev3", 00:40:01.493 "uuid": "1620a8b3-cd8d-5410-8e51-679092f65397", 00:40:01.493 "is_configured": true, 00:40:01.493 "data_offset": 2048, 00:40:01.493 "data_size": 63488 00:40:01.493 }, 00:40:01.493 { 00:40:01.493 "name": "BaseBdev4", 00:40:01.493 "uuid": "d37e0088-ef7d-59c9-bcbc-42db20a8ea3a", 00:40:01.493 "is_configured": true, 00:40:01.493 "data_offset": 2048, 00:40:01.493 "data_size": 63488 00:40:01.493 } 00:40:01.493 ] 00:40:01.493 }' 00:40:01.493 17:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:01.493 17:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:01.493 17:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:01.752 17:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:40:01.752 17:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:40:01.752 17:37:02 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:01.752 17:37:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:40:01.752 [2024-11-26 17:37:02.223761] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:40:01.752 [2024-11-26 17:37:02.266122] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:40:01.752 [2024-11-26 17:37:02.266302] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:01.752 [2024-11-26 17:37:02.266372] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:40:01.752 [2024-11-26 17:37:02.266405] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:40:01.752 17:37:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:01.752 17:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:40:01.752 17:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:01.752 17:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:01.752 17:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:40:01.752 17:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:40:01.752 17:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:40:01.752 17:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:01.752 17:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:01.752 17:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:01.752 17:37:02 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:01.752 17:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:01.752 17:37:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:01.752 17:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:01.752 17:37:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:40:01.752 17:37:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:01.752 17:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:01.752 "name": "raid_bdev1", 00:40:01.752 "uuid": "c044c880-21d8-4791-af2a-82c043040134", 00:40:01.752 "strip_size_kb": 0, 00:40:01.752 "state": "online", 00:40:01.752 "raid_level": "raid1", 00:40:01.752 "superblock": true, 00:40:01.752 "num_base_bdevs": 4, 00:40:01.752 "num_base_bdevs_discovered": 2, 00:40:01.752 "num_base_bdevs_operational": 2, 00:40:01.752 "base_bdevs_list": [ 00:40:01.752 { 00:40:01.752 "name": null, 00:40:01.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:01.752 "is_configured": false, 00:40:01.752 "data_offset": 0, 00:40:01.752 "data_size": 63488 00:40:01.752 }, 00:40:01.752 { 00:40:01.752 "name": null, 00:40:01.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:01.752 "is_configured": false, 00:40:01.752 "data_offset": 2048, 00:40:01.752 "data_size": 63488 00:40:01.752 }, 00:40:01.752 { 00:40:01.752 "name": "BaseBdev3", 00:40:01.752 "uuid": "1620a8b3-cd8d-5410-8e51-679092f65397", 00:40:01.752 "is_configured": true, 00:40:01.752 "data_offset": 2048, 00:40:01.752 "data_size": 63488 00:40:01.752 }, 00:40:01.752 { 00:40:01.752 "name": "BaseBdev4", 00:40:01.752 "uuid": "d37e0088-ef7d-59c9-bcbc-42db20a8ea3a", 00:40:01.752 "is_configured": true, 00:40:01.752 "data_offset": 2048, 00:40:01.752 
"data_size": 63488 00:40:01.752 } 00:40:01.752 ] 00:40:01.752 }' 00:40:01.752 17:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:01.752 17:37:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:40:02.317 17:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:40:02.317 17:37:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:02.317 17:37:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:40:02.317 [2024-11-26 17:37:02.795954] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:40:02.317 [2024-11-26 17:37:02.796034] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:02.317 [2024-11-26 17:37:02.796067] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:40:02.317 [2024-11-26 17:37:02.796077] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:02.317 [2024-11-26 17:37:02.796613] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:02.317 [2024-11-26 17:37:02.796648] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:40:02.317 [2024-11-26 17:37:02.796761] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:40:02.317 [2024-11-26 17:37:02.796774] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:40:02.317 [2024-11-26 17:37:02.796789] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:40:02.317 [2024-11-26 17:37:02.796817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:40:02.317 [2024-11-26 17:37:02.814202] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:40:02.317 spare 00:40:02.317 17:37:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:02.317 [2024-11-26 17:37:02.816357] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:40:02.317 17:37:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:40:03.253 17:37:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:03.253 17:37:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:03.253 17:37:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:40:03.253 17:37:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:40:03.253 17:37:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:03.253 17:37:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:03.253 17:37:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:03.253 17:37:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:03.253 17:37:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:40:03.253 17:37:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:03.253 17:37:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:03.253 "name": "raid_bdev1", 00:40:03.253 "uuid": "c044c880-21d8-4791-af2a-82c043040134", 00:40:03.253 "strip_size_kb": 0, 00:40:03.253 
"state": "online", 00:40:03.253 "raid_level": "raid1", 00:40:03.253 "superblock": true, 00:40:03.253 "num_base_bdevs": 4, 00:40:03.253 "num_base_bdevs_discovered": 3, 00:40:03.253 "num_base_bdevs_operational": 3, 00:40:03.253 "process": { 00:40:03.253 "type": "rebuild", 00:40:03.253 "target": "spare", 00:40:03.253 "progress": { 00:40:03.253 "blocks": 20480, 00:40:03.253 "percent": 32 00:40:03.253 } 00:40:03.253 }, 00:40:03.253 "base_bdevs_list": [ 00:40:03.253 { 00:40:03.253 "name": "spare", 00:40:03.253 "uuid": "5a9f615d-a6cd-503c-b9f5-4f77ac854b94", 00:40:03.253 "is_configured": true, 00:40:03.253 "data_offset": 2048, 00:40:03.253 "data_size": 63488 00:40:03.253 }, 00:40:03.253 { 00:40:03.253 "name": null, 00:40:03.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:03.253 "is_configured": false, 00:40:03.253 "data_offset": 2048, 00:40:03.253 "data_size": 63488 00:40:03.253 }, 00:40:03.253 { 00:40:03.253 "name": "BaseBdev3", 00:40:03.253 "uuid": "1620a8b3-cd8d-5410-8e51-679092f65397", 00:40:03.253 "is_configured": true, 00:40:03.253 "data_offset": 2048, 00:40:03.253 "data_size": 63488 00:40:03.253 }, 00:40:03.253 { 00:40:03.253 "name": "BaseBdev4", 00:40:03.253 "uuid": "d37e0088-ef7d-59c9-bcbc-42db20a8ea3a", 00:40:03.253 "is_configured": true, 00:40:03.253 "data_offset": 2048, 00:40:03.253 "data_size": 63488 00:40:03.253 } 00:40:03.253 ] 00:40:03.253 }' 00:40:03.253 17:37:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:03.253 17:37:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:03.253 17:37:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:03.513 17:37:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:40:03.513 17:37:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:40:03.513 17:37:03 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:03.513 17:37:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:40:03.513 [2024-11-26 17:37:03.968216] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:40:03.513 [2024-11-26 17:37:04.022497] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:40:03.513 [2024-11-26 17:37:04.022667] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:03.513 [2024-11-26 17:37:04.022712] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:40:03.513 [2024-11-26 17:37:04.022740] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:40:03.513 17:37:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:03.513 17:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:40:03.513 17:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:03.513 17:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:03.513 17:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:40:03.513 17:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:40:03.513 17:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:40:03.513 17:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:03.513 17:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:03.513 17:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:03.513 17:37:04 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:03.513 17:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:03.513 17:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:03.513 17:37:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:03.513 17:37:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:40:03.513 17:37:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:03.513 17:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:03.513 "name": "raid_bdev1", 00:40:03.513 "uuid": "c044c880-21d8-4791-af2a-82c043040134", 00:40:03.513 "strip_size_kb": 0, 00:40:03.513 "state": "online", 00:40:03.513 "raid_level": "raid1", 00:40:03.513 "superblock": true, 00:40:03.513 "num_base_bdevs": 4, 00:40:03.513 "num_base_bdevs_discovered": 2, 00:40:03.513 "num_base_bdevs_operational": 2, 00:40:03.513 "base_bdevs_list": [ 00:40:03.513 { 00:40:03.513 "name": null, 00:40:03.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:03.513 "is_configured": false, 00:40:03.513 "data_offset": 0, 00:40:03.513 "data_size": 63488 00:40:03.513 }, 00:40:03.513 { 00:40:03.513 "name": null, 00:40:03.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:03.513 "is_configured": false, 00:40:03.513 "data_offset": 2048, 00:40:03.513 "data_size": 63488 00:40:03.513 }, 00:40:03.513 { 00:40:03.513 "name": "BaseBdev3", 00:40:03.513 "uuid": "1620a8b3-cd8d-5410-8e51-679092f65397", 00:40:03.513 "is_configured": true, 00:40:03.513 "data_offset": 2048, 00:40:03.513 "data_size": 63488 00:40:03.513 }, 00:40:03.513 { 00:40:03.513 "name": "BaseBdev4", 00:40:03.513 "uuid": "d37e0088-ef7d-59c9-bcbc-42db20a8ea3a", 00:40:03.513 "is_configured": true, 00:40:03.513 "data_offset": 2048, 00:40:03.513 
"data_size": 63488 00:40:03.513 } 00:40:03.513 ] 00:40:03.513 }' 00:40:03.513 17:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:03.513 17:37:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:40:04.081 17:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:40:04.081 17:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:04.081 17:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:40:04.081 17:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:40:04.081 17:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:04.081 17:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:04.081 17:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:04.081 17:37:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:04.081 17:37:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:40:04.081 17:37:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:04.081 17:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:04.081 "name": "raid_bdev1", 00:40:04.081 "uuid": "c044c880-21d8-4791-af2a-82c043040134", 00:40:04.081 "strip_size_kb": 0, 00:40:04.081 "state": "online", 00:40:04.081 "raid_level": "raid1", 00:40:04.081 "superblock": true, 00:40:04.081 "num_base_bdevs": 4, 00:40:04.081 "num_base_bdevs_discovered": 2, 00:40:04.081 "num_base_bdevs_operational": 2, 00:40:04.081 "base_bdevs_list": [ 00:40:04.081 { 00:40:04.081 "name": null, 00:40:04.081 "uuid": "00000000-0000-0000-0000-000000000000", 
00:40:04.081 "is_configured": false, 00:40:04.081 "data_offset": 0, 00:40:04.081 "data_size": 63488 00:40:04.081 }, 00:40:04.081 { 00:40:04.081 "name": null, 00:40:04.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:04.081 "is_configured": false, 00:40:04.081 "data_offset": 2048, 00:40:04.081 "data_size": 63488 00:40:04.081 }, 00:40:04.081 { 00:40:04.081 "name": "BaseBdev3", 00:40:04.081 "uuid": "1620a8b3-cd8d-5410-8e51-679092f65397", 00:40:04.081 "is_configured": true, 00:40:04.081 "data_offset": 2048, 00:40:04.081 "data_size": 63488 00:40:04.081 }, 00:40:04.081 { 00:40:04.081 "name": "BaseBdev4", 00:40:04.081 "uuid": "d37e0088-ef7d-59c9-bcbc-42db20a8ea3a", 00:40:04.081 "is_configured": true, 00:40:04.081 "data_offset": 2048, 00:40:04.081 "data_size": 63488 00:40:04.081 } 00:40:04.081 ] 00:40:04.081 }' 00:40:04.081 17:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:04.081 17:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:40:04.081 17:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:04.081 17:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:40:04.081 17:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:40:04.081 17:37:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:04.081 17:37:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:40:04.081 17:37:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:04.081 17:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:40:04.081 17:37:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:04.081 17:37:04 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:40:04.081 [2024-11-26 17:37:04.660854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:40:04.081 [2024-11-26 17:37:04.660925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:04.081 [2024-11-26 17:37:04.660946] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:40:04.081 [2024-11-26 17:37:04.660960] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:04.081 [2024-11-26 17:37:04.661446] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:04.081 [2024-11-26 17:37:04.661467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:40:04.081 [2024-11-26 17:37:04.661571] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:40:04.081 [2024-11-26 17:37:04.661590] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:40:04.081 [2024-11-26 17:37:04.661599] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:40:04.081 [2024-11-26 17:37:04.661614] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:40:04.081 BaseBdev1 00:40:04.081 17:37:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:04.081 17:37:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:40:05.062 17:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:40:05.062 17:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:05.063 17:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:40:05.063 17:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:40:05.063 17:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:40:05.063 17:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:40:05.063 17:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:05.063 17:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:05.063 17:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:05.063 17:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:05.063 17:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:05.063 17:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:05.063 17:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:05.063 17:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:40:05.063 17:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:05.063 17:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:05.063 "name": "raid_bdev1", 00:40:05.063 "uuid": "c044c880-21d8-4791-af2a-82c043040134", 00:40:05.063 "strip_size_kb": 0, 00:40:05.063 "state": "online", 00:40:05.063 "raid_level": "raid1", 00:40:05.063 "superblock": true, 00:40:05.063 "num_base_bdevs": 4, 00:40:05.063 "num_base_bdevs_discovered": 2, 00:40:05.063 "num_base_bdevs_operational": 2, 00:40:05.063 "base_bdevs_list": [ 00:40:05.063 { 00:40:05.063 "name": null, 00:40:05.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:05.063 "is_configured": false, 00:40:05.063 
"data_offset": 0, 00:40:05.063 "data_size": 63488 00:40:05.063 }, 00:40:05.063 { 00:40:05.063 "name": null, 00:40:05.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:05.063 "is_configured": false, 00:40:05.063 "data_offset": 2048, 00:40:05.063 "data_size": 63488 00:40:05.063 }, 00:40:05.063 { 00:40:05.063 "name": "BaseBdev3", 00:40:05.063 "uuid": "1620a8b3-cd8d-5410-8e51-679092f65397", 00:40:05.063 "is_configured": true, 00:40:05.063 "data_offset": 2048, 00:40:05.063 "data_size": 63488 00:40:05.063 }, 00:40:05.063 { 00:40:05.063 "name": "BaseBdev4", 00:40:05.063 "uuid": "d37e0088-ef7d-59c9-bcbc-42db20a8ea3a", 00:40:05.063 "is_configured": true, 00:40:05.063 "data_offset": 2048, 00:40:05.063 "data_size": 63488 00:40:05.063 } 00:40:05.063 ] 00:40:05.063 }' 00:40:05.063 17:37:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:05.063 17:37:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:40:05.642 17:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:40:05.642 17:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:05.642 17:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:40:05.642 17:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:40:05.642 17:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:05.642 17:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:05.642 17:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:05.642 17:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:05.642 17:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:40:05.642 17:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:05.642 17:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:05.642 "name": "raid_bdev1", 00:40:05.642 "uuid": "c044c880-21d8-4791-af2a-82c043040134", 00:40:05.642 "strip_size_kb": 0, 00:40:05.642 "state": "online", 00:40:05.642 "raid_level": "raid1", 00:40:05.642 "superblock": true, 00:40:05.642 "num_base_bdevs": 4, 00:40:05.642 "num_base_bdevs_discovered": 2, 00:40:05.642 "num_base_bdevs_operational": 2, 00:40:05.642 "base_bdevs_list": [ 00:40:05.642 { 00:40:05.642 "name": null, 00:40:05.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:05.642 "is_configured": false, 00:40:05.642 "data_offset": 0, 00:40:05.642 "data_size": 63488 00:40:05.642 }, 00:40:05.642 { 00:40:05.642 "name": null, 00:40:05.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:05.643 "is_configured": false, 00:40:05.643 "data_offset": 2048, 00:40:05.643 "data_size": 63488 00:40:05.643 }, 00:40:05.643 { 00:40:05.643 "name": "BaseBdev3", 00:40:05.643 "uuid": "1620a8b3-cd8d-5410-8e51-679092f65397", 00:40:05.643 "is_configured": true, 00:40:05.643 "data_offset": 2048, 00:40:05.643 "data_size": 63488 00:40:05.643 }, 00:40:05.643 { 00:40:05.643 "name": "BaseBdev4", 00:40:05.643 "uuid": "d37e0088-ef7d-59c9-bcbc-42db20a8ea3a", 00:40:05.643 "is_configured": true, 00:40:05.643 "data_offset": 2048, 00:40:05.643 "data_size": 63488 00:40:05.643 } 00:40:05.643 ] 00:40:05.643 }' 00:40:05.643 17:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:05.643 17:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:40:05.643 17:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:05.643 17:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:40:05.643 
17:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:40:05.643 17:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:40:05.643 17:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:40:05.643 17:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:40:05.643 17:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:05.643 17:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:40:05.643 17:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:05.643 17:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:40:05.643 17:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:05.643 17:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:40:05.643 [2024-11-26 17:37:06.286480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:40:05.643 [2024-11-26 17:37:06.286732] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:40:05.643 [2024-11-26 17:37:06.286753] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:40:05.643 request: 00:40:05.643 { 00:40:05.643 "base_bdev": "BaseBdev1", 00:40:05.643 "raid_bdev": "raid_bdev1", 00:40:05.643 "method": "bdev_raid_add_base_bdev", 00:40:05.643 "req_id": 1 00:40:05.643 } 00:40:05.643 Got JSON-RPC error response 00:40:05.643 response: 00:40:05.643 { 00:40:05.643 "code": -22, 00:40:05.643 
"message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:40:05.643 } 00:40:05.643 17:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:40:05.643 17:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:40:05.643 17:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:05.643 17:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:05.643 17:37:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:05.643 17:37:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:40:07.021 17:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:40:07.021 17:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:07.021 17:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:07.021 17:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:40:07.021 17:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:40:07.021 17:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:40:07.021 17:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:07.021 17:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:07.021 17:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:07.021 17:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:07.021 17:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:07.021 17:37:07 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:07.021 17:37:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.021 17:37:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:40:07.021 17:37:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.021 17:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:07.021 "name": "raid_bdev1", 00:40:07.021 "uuid": "c044c880-21d8-4791-af2a-82c043040134", 00:40:07.021 "strip_size_kb": 0, 00:40:07.021 "state": "online", 00:40:07.021 "raid_level": "raid1", 00:40:07.021 "superblock": true, 00:40:07.021 "num_base_bdevs": 4, 00:40:07.021 "num_base_bdevs_discovered": 2, 00:40:07.021 "num_base_bdevs_operational": 2, 00:40:07.021 "base_bdevs_list": [ 00:40:07.021 { 00:40:07.022 "name": null, 00:40:07.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:07.022 "is_configured": false, 00:40:07.022 "data_offset": 0, 00:40:07.022 "data_size": 63488 00:40:07.022 }, 00:40:07.022 { 00:40:07.022 "name": null, 00:40:07.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:07.022 "is_configured": false, 00:40:07.022 "data_offset": 2048, 00:40:07.022 "data_size": 63488 00:40:07.022 }, 00:40:07.022 { 00:40:07.022 "name": "BaseBdev3", 00:40:07.022 "uuid": "1620a8b3-cd8d-5410-8e51-679092f65397", 00:40:07.022 "is_configured": true, 00:40:07.022 "data_offset": 2048, 00:40:07.022 "data_size": 63488 00:40:07.022 }, 00:40:07.022 { 00:40:07.022 "name": "BaseBdev4", 00:40:07.022 "uuid": "d37e0088-ef7d-59c9-bcbc-42db20a8ea3a", 00:40:07.022 "is_configured": true, 00:40:07.022 "data_offset": 2048, 00:40:07.022 "data_size": 63488 00:40:07.022 } 00:40:07.022 ] 00:40:07.022 }' 00:40:07.022 17:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:07.022 17:37:07 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:40:07.281 17:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:40:07.281 17:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:07.281 17:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:40:07.281 17:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:40:07.281 17:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:07.281 17:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:07.281 17:37:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.281 17:37:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:40:07.281 17:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:07.281 17:37:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.281 17:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:07.281 "name": "raid_bdev1", 00:40:07.281 "uuid": "c044c880-21d8-4791-af2a-82c043040134", 00:40:07.281 "strip_size_kb": 0, 00:40:07.281 "state": "online", 00:40:07.281 "raid_level": "raid1", 00:40:07.281 "superblock": true, 00:40:07.281 "num_base_bdevs": 4, 00:40:07.281 "num_base_bdevs_discovered": 2, 00:40:07.281 "num_base_bdevs_operational": 2, 00:40:07.281 "base_bdevs_list": [ 00:40:07.281 { 00:40:07.281 "name": null, 00:40:07.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:07.281 "is_configured": false, 00:40:07.281 "data_offset": 0, 00:40:07.281 "data_size": 63488 00:40:07.281 }, 00:40:07.281 { 00:40:07.281 "name": null, 00:40:07.281 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:40:07.281 "is_configured": false, 00:40:07.281 "data_offset": 2048, 00:40:07.281 "data_size": 63488 00:40:07.281 }, 00:40:07.281 { 00:40:07.281 "name": "BaseBdev3", 00:40:07.281 "uuid": "1620a8b3-cd8d-5410-8e51-679092f65397", 00:40:07.281 "is_configured": true, 00:40:07.281 "data_offset": 2048, 00:40:07.281 "data_size": 63488 00:40:07.281 }, 00:40:07.281 { 00:40:07.281 "name": "BaseBdev4", 00:40:07.281 "uuid": "d37e0088-ef7d-59c9-bcbc-42db20a8ea3a", 00:40:07.281 "is_configured": true, 00:40:07.281 "data_offset": 2048, 00:40:07.281 "data_size": 63488 00:40:07.281 } 00:40:07.281 ] 00:40:07.281 }' 00:40:07.281 17:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:07.281 17:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:40:07.281 17:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:07.281 17:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:40:07.281 17:37:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79460 00:40:07.281 17:37:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79460 ']' 00:40:07.281 17:37:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79460 00:40:07.282 17:37:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:40:07.282 17:37:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:07.282 17:37:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79460 00:40:07.282 17:37:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:07.282 killing process with pid 79460 00:40:07.282 Received shutdown signal, test time was about 18.188436 
seconds 00:40:07.282 00:40:07.282 Latency(us) 00:40:07.282 [2024-11-26T17:37:07.977Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:07.282 [2024-11-26T17:37:07.977Z] =================================================================================================================== 00:40:07.282 [2024-11-26T17:37:07.977Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:07.282 17:37:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:07.282 17:37:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79460' 00:40:07.282 17:37:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79460 00:40:07.282 [2024-11-26 17:37:07.958141] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:40:07.282 17:37:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79460 00:40:07.282 [2024-11-26 17:37:07.958300] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:40:07.282 [2024-11-26 17:37:07.958389] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:40:07.282 [2024-11-26 17:37:07.958401] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:40:07.850 [2024-11-26 17:37:08.420188] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:40:09.227 17:37:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:40:09.227 00:40:09.227 real 0m21.842s 00:40:09.227 user 0m28.631s 00:40:09.227 sys 0m2.733s 00:40:09.227 17:37:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:09.227 ************************************ 00:40:09.227 END TEST raid_rebuild_test_sb_io 00:40:09.227 ************************************ 00:40:09.227 17:37:09 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:40:09.227 17:37:09 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:40:09.227 17:37:09 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:40:09.227 17:37:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:40:09.227 17:37:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:09.227 17:37:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:40:09.227 ************************************ 00:40:09.227 START TEST raid5f_state_function_test 00:40:09.227 ************************************ 00:40:09.227 17:37:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:40:09.227 17:37:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:40:09.227 17:37:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:40:09.227 17:37:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:40:09.227 17:37:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:40:09.227 17:37:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:40:09.227 17:37:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:40:09.227 17:37:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:40:09.227 17:37:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:40:09.227 17:37:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:40:09.227 17:37:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:40:09.227 17:37:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:40:09.227 17:37:09 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:40:09.227 17:37:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:40:09.227 17:37:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:40:09.227 17:37:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:40:09.227 17:37:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:40:09.227 17:37:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:40:09.227 17:37:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:40:09.227 17:37:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:40:09.227 17:37:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:40:09.227 17:37:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:40:09.227 17:37:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:40:09.227 17:37:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:40:09.227 17:37:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:40:09.227 17:37:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:40:09.227 17:37:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:40:09.227 17:37:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80187 00:40:09.227 17:37:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:40:09.227 17:37:09 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80187' 00:40:09.227 Process raid pid: 80187 00:40:09.227 17:37:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80187 00:40:09.227 17:37:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80187 ']' 00:40:09.227 17:37:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:09.227 17:37:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:09.227 17:37:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:09.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:09.227 17:37:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:09.227 17:37:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:09.227 [2024-11-26 17:37:09.861164] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:40:09.227 [2024-11-26 17:37:09.861371] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:09.486 [2024-11-26 17:37:10.037140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:09.486 [2024-11-26 17:37:10.164123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:09.744 [2024-11-26 17:37:10.408809] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:40:09.744 [2024-11-26 17:37:10.408960] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:40:10.365 17:37:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:10.365 17:37:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:40:10.365 17:37:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:40:10.365 17:37:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:10.365 17:37:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:10.365 [2024-11-26 17:37:10.746694] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:40:10.365 [2024-11-26 17:37:10.746754] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:40:10.365 [2024-11-26 17:37:10.746766] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:40:10.365 [2024-11-26 17:37:10.746793] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:40:10.365 [2024-11-26 17:37:10.746801] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:40:10.365 [2024-11-26 17:37:10.746812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:40:10.365 17:37:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:10.365 17:37:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:40:10.365 17:37:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:10.365 17:37:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:40:10.365 17:37:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:10.365 17:37:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:10.365 17:37:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:40:10.365 17:37:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:10.365 17:37:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:10.365 17:37:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:10.365 17:37:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:10.365 17:37:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:10.365 17:37:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:10.365 17:37:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:10.365 17:37:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:10.365 17:37:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:40:10.365 17:37:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:10.365 "name": "Existed_Raid", 00:40:10.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:10.365 "strip_size_kb": 64, 00:40:10.365 "state": "configuring", 00:40:10.365 "raid_level": "raid5f", 00:40:10.365 "superblock": false, 00:40:10.365 "num_base_bdevs": 3, 00:40:10.365 "num_base_bdevs_discovered": 0, 00:40:10.365 "num_base_bdevs_operational": 3, 00:40:10.365 "base_bdevs_list": [ 00:40:10.365 { 00:40:10.365 "name": "BaseBdev1", 00:40:10.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:10.365 "is_configured": false, 00:40:10.365 "data_offset": 0, 00:40:10.365 "data_size": 0 00:40:10.365 }, 00:40:10.365 { 00:40:10.365 "name": "BaseBdev2", 00:40:10.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:10.365 "is_configured": false, 00:40:10.365 "data_offset": 0, 00:40:10.365 "data_size": 0 00:40:10.365 }, 00:40:10.365 { 00:40:10.365 "name": "BaseBdev3", 00:40:10.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:10.365 "is_configured": false, 00:40:10.365 "data_offset": 0, 00:40:10.365 "data_size": 0 00:40:10.365 } 00:40:10.365 ] 00:40:10.365 }' 00:40:10.365 17:37:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:10.365 17:37:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:10.624 17:37:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:40:10.624 17:37:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:10.624 17:37:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:10.624 [2024-11-26 17:37:11.185845] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:40:10.624 [2024-11-26 17:37:11.185883] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:40:10.624 17:37:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:10.624 17:37:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:40:10.624 17:37:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:10.624 17:37:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:10.624 [2024-11-26 17:37:11.193850] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:40:10.624 [2024-11-26 17:37:11.193962] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:40:10.624 [2024-11-26 17:37:11.193977] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:40:10.624 [2024-11-26 17:37:11.193988] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:40:10.624 [2024-11-26 17:37:11.193995] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:40:10.624 [2024-11-26 17:37:11.194005] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:40:10.624 17:37:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:10.624 17:37:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:40:10.624 17:37:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:10.624 17:37:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:10.624 [2024-11-26 17:37:11.237476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:40:10.624 BaseBdev1 00:40:10.624 17:37:11 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:10.624 17:37:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:40:10.624 17:37:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:40:10.624 17:37:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:10.624 17:37:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:40:10.624 17:37:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:10.624 17:37:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:10.624 17:37:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:40:10.624 17:37:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:10.624 17:37:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:10.624 17:37:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:10.624 17:37:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:40:10.624 17:37:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:10.624 17:37:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:10.624 [ 00:40:10.624 { 00:40:10.624 "name": "BaseBdev1", 00:40:10.624 "aliases": [ 00:40:10.624 "95f32bde-cb8c-405b-aa37-a064f1e0bccb" 00:40:10.624 ], 00:40:10.624 "product_name": "Malloc disk", 00:40:10.624 "block_size": 512, 00:40:10.624 "num_blocks": 65536, 00:40:10.624 "uuid": "95f32bde-cb8c-405b-aa37-a064f1e0bccb", 00:40:10.624 "assigned_rate_limits": { 00:40:10.624 "rw_ios_per_sec": 0, 00:40:10.624 
"rw_mbytes_per_sec": 0, 00:40:10.624 "r_mbytes_per_sec": 0, 00:40:10.624 "w_mbytes_per_sec": 0 00:40:10.624 }, 00:40:10.624 "claimed": true, 00:40:10.624 "claim_type": "exclusive_write", 00:40:10.624 "zoned": false, 00:40:10.624 "supported_io_types": { 00:40:10.624 "read": true, 00:40:10.624 "write": true, 00:40:10.624 "unmap": true, 00:40:10.624 "flush": true, 00:40:10.624 "reset": true, 00:40:10.624 "nvme_admin": false, 00:40:10.624 "nvme_io": false, 00:40:10.624 "nvme_io_md": false, 00:40:10.624 "write_zeroes": true, 00:40:10.624 "zcopy": true, 00:40:10.624 "get_zone_info": false, 00:40:10.624 "zone_management": false, 00:40:10.624 "zone_append": false, 00:40:10.624 "compare": false, 00:40:10.624 "compare_and_write": false, 00:40:10.624 "abort": true, 00:40:10.624 "seek_hole": false, 00:40:10.624 "seek_data": false, 00:40:10.624 "copy": true, 00:40:10.624 "nvme_iov_md": false 00:40:10.624 }, 00:40:10.624 "memory_domains": [ 00:40:10.624 { 00:40:10.624 "dma_device_id": "system", 00:40:10.624 "dma_device_type": 1 00:40:10.624 }, 00:40:10.624 { 00:40:10.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:10.624 "dma_device_type": 2 00:40:10.624 } 00:40:10.624 ], 00:40:10.624 "driver_specific": {} 00:40:10.624 } 00:40:10.624 ] 00:40:10.624 17:37:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:10.624 17:37:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:40:10.624 17:37:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:40:10.624 17:37:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:10.624 17:37:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:40:10.624 17:37:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:10.624 17:37:11 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:10.624 17:37:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:40:10.624 17:37:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:10.624 17:37:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:10.624 17:37:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:10.624 17:37:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:10.624 17:37:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:10.624 17:37:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:10.624 17:37:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:10.624 17:37:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:10.624 17:37:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:10.882 17:37:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:10.882 "name": "Existed_Raid", 00:40:10.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:10.882 "strip_size_kb": 64, 00:40:10.882 "state": "configuring", 00:40:10.882 "raid_level": "raid5f", 00:40:10.882 "superblock": false, 00:40:10.882 "num_base_bdevs": 3, 00:40:10.882 "num_base_bdevs_discovered": 1, 00:40:10.882 "num_base_bdevs_operational": 3, 00:40:10.882 "base_bdevs_list": [ 00:40:10.882 { 00:40:10.882 "name": "BaseBdev1", 00:40:10.882 "uuid": "95f32bde-cb8c-405b-aa37-a064f1e0bccb", 00:40:10.882 "is_configured": true, 00:40:10.882 "data_offset": 0, 00:40:10.882 "data_size": 65536 00:40:10.882 }, 00:40:10.882 { 00:40:10.882 "name": 
"BaseBdev2", 00:40:10.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:10.882 "is_configured": false, 00:40:10.882 "data_offset": 0, 00:40:10.882 "data_size": 0 00:40:10.882 }, 00:40:10.882 { 00:40:10.882 "name": "BaseBdev3", 00:40:10.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:10.882 "is_configured": false, 00:40:10.882 "data_offset": 0, 00:40:10.882 "data_size": 0 00:40:10.882 } 00:40:10.882 ] 00:40:10.882 }' 00:40:10.882 17:37:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:10.882 17:37:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:11.141 17:37:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:40:11.141 17:37:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:11.141 17:37:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:11.141 [2024-11-26 17:37:11.684775] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:40:11.141 [2024-11-26 17:37:11.684840] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:40:11.141 17:37:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:11.141 17:37:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:40:11.141 17:37:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:11.141 17:37:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:11.141 [2024-11-26 17:37:11.696795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:40:11.141 [2024-11-26 17:37:11.698657] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:40:11.141 [2024-11-26 17:37:11.698692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:40:11.141 [2024-11-26 17:37:11.698702] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:40:11.141 [2024-11-26 17:37:11.698712] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:40:11.141 17:37:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:11.141 17:37:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:40:11.141 17:37:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:40:11.141 17:37:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:40:11.141 17:37:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:11.141 17:37:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:40:11.141 17:37:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:11.141 17:37:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:11.141 17:37:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:40:11.141 17:37:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:11.141 17:37:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:11.141 17:37:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:11.141 17:37:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:11.141 17:37:11 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:11.141 17:37:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:11.141 17:37:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:11.141 17:37:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:11.141 17:37:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:11.141 17:37:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:11.141 "name": "Existed_Raid", 00:40:11.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:11.141 "strip_size_kb": 64, 00:40:11.141 "state": "configuring", 00:40:11.141 "raid_level": "raid5f", 00:40:11.141 "superblock": false, 00:40:11.141 "num_base_bdevs": 3, 00:40:11.141 "num_base_bdevs_discovered": 1, 00:40:11.141 "num_base_bdevs_operational": 3, 00:40:11.141 "base_bdevs_list": [ 00:40:11.141 { 00:40:11.141 "name": "BaseBdev1", 00:40:11.141 "uuid": "95f32bde-cb8c-405b-aa37-a064f1e0bccb", 00:40:11.141 "is_configured": true, 00:40:11.141 "data_offset": 0, 00:40:11.141 "data_size": 65536 00:40:11.141 }, 00:40:11.141 { 00:40:11.141 "name": "BaseBdev2", 00:40:11.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:11.141 "is_configured": false, 00:40:11.141 "data_offset": 0, 00:40:11.141 "data_size": 0 00:40:11.141 }, 00:40:11.141 { 00:40:11.141 "name": "BaseBdev3", 00:40:11.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:11.141 "is_configured": false, 00:40:11.141 "data_offset": 0, 00:40:11.141 "data_size": 0 00:40:11.141 } 00:40:11.141 ] 00:40:11.141 }' 00:40:11.141 17:37:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:11.141 17:37:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:11.708 17:37:12 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:40:11.708 17:37:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:11.708 17:37:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:11.708 [2024-11-26 17:37:12.185480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:40:11.708 BaseBdev2 00:40:11.708 17:37:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:11.708 17:37:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:40:11.708 17:37:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:40:11.708 17:37:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:11.708 17:37:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:40:11.708 17:37:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:11.708 17:37:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:11.708 17:37:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:40:11.708 17:37:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:11.708 17:37:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:11.708 17:37:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:11.708 17:37:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:40:11.708 17:37:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:11.708 17:37:12 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:40:11.708 [ 00:40:11.708 { 00:40:11.708 "name": "BaseBdev2", 00:40:11.708 "aliases": [ 00:40:11.708 "4f8782be-e168-4308-883c-652b43998772" 00:40:11.708 ], 00:40:11.708 "product_name": "Malloc disk", 00:40:11.708 "block_size": 512, 00:40:11.708 "num_blocks": 65536, 00:40:11.708 "uuid": "4f8782be-e168-4308-883c-652b43998772", 00:40:11.708 "assigned_rate_limits": { 00:40:11.708 "rw_ios_per_sec": 0, 00:40:11.708 "rw_mbytes_per_sec": 0, 00:40:11.708 "r_mbytes_per_sec": 0, 00:40:11.708 "w_mbytes_per_sec": 0 00:40:11.708 }, 00:40:11.708 "claimed": true, 00:40:11.708 "claim_type": "exclusive_write", 00:40:11.708 "zoned": false, 00:40:11.708 "supported_io_types": { 00:40:11.708 "read": true, 00:40:11.708 "write": true, 00:40:11.708 "unmap": true, 00:40:11.708 "flush": true, 00:40:11.708 "reset": true, 00:40:11.708 "nvme_admin": false, 00:40:11.708 "nvme_io": false, 00:40:11.708 "nvme_io_md": false, 00:40:11.708 "write_zeroes": true, 00:40:11.708 "zcopy": true, 00:40:11.708 "get_zone_info": false, 00:40:11.708 "zone_management": false, 00:40:11.708 "zone_append": false, 00:40:11.708 "compare": false, 00:40:11.708 "compare_and_write": false, 00:40:11.708 "abort": true, 00:40:11.708 "seek_hole": false, 00:40:11.708 "seek_data": false, 00:40:11.708 "copy": true, 00:40:11.708 "nvme_iov_md": false 00:40:11.708 }, 00:40:11.708 "memory_domains": [ 00:40:11.708 { 00:40:11.708 "dma_device_id": "system", 00:40:11.708 "dma_device_type": 1 00:40:11.709 }, 00:40:11.709 { 00:40:11.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:11.709 "dma_device_type": 2 00:40:11.709 } 00:40:11.709 ], 00:40:11.709 "driver_specific": {} 00:40:11.709 } 00:40:11.709 ] 00:40:11.709 17:37:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:11.709 17:37:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:40:11.709 17:37:12 bdev_raid.raid5f_state_function_test -- 
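The `waitforbdev BaseBdev2` call above polls `rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000` until the bdev appears or the 2000 ms timeout expires. A hedged Python sketch of that poll-until-present pattern, with a stub standing in for the RPC (the loop structure is inferred from the log, not copied from the helper's source):

```python
import time

def wait_for_bdev(get_bdevs, name, timeout_s=2.0, interval_s=0.05):
    """Poll until a bdev named `name` shows up, or time out.

    `get_bdevs` is any callable returning the current list of bdev dicts
    (standing in for `rpc_cmd bdev_get_bdevs`). Returns the matching dict.
    """
    deadline = time.monotonic() + timeout_s
    while True:
        for b in get_bdevs():
            if b["name"] == name:
                return b
        if time.monotonic() >= deadline:
            raise TimeoutError(f"bdev {name} did not appear in {timeout_s}s")
        time.sleep(interval_s)

# Stub RPC: the bdev becomes visible on the second poll.
polls = {"count": 0}
def fake_get_bdevs():
    polls["count"] += 1
    return [{"name": "BaseBdev2"}] if polls["count"] >= 2 else []

print(wait_for_bdev(fake_get_bdevs, "BaseBdev2")["name"])  # BaseBdev2
```

Using `time.monotonic()` for the deadline keeps the wait immune to wall-clock adjustments, which matters in long CI runs.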
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:40:11.709 17:37:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:40:11.709 17:37:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:40:11.709 17:37:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:11.709 17:37:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:40:11.709 17:37:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:11.709 17:37:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:11.709 17:37:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:40:11.709 17:37:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:11.709 17:37:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:11.709 17:37:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:11.709 17:37:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:11.709 17:37:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:11.709 17:37:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:11.709 17:37:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:11.709 17:37:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:11.709 17:37:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:11.709 17:37:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:40:11.709 "name": "Existed_Raid", 00:40:11.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:11.709 "strip_size_kb": 64, 00:40:11.709 "state": "configuring", 00:40:11.709 "raid_level": "raid5f", 00:40:11.709 "superblock": false, 00:40:11.709 "num_base_bdevs": 3, 00:40:11.709 "num_base_bdevs_discovered": 2, 00:40:11.709 "num_base_bdevs_operational": 3, 00:40:11.709 "base_bdevs_list": [ 00:40:11.709 { 00:40:11.709 "name": "BaseBdev1", 00:40:11.709 "uuid": "95f32bde-cb8c-405b-aa37-a064f1e0bccb", 00:40:11.709 "is_configured": true, 00:40:11.709 "data_offset": 0, 00:40:11.709 "data_size": 65536 00:40:11.709 }, 00:40:11.709 { 00:40:11.709 "name": "BaseBdev2", 00:40:11.709 "uuid": "4f8782be-e168-4308-883c-652b43998772", 00:40:11.709 "is_configured": true, 00:40:11.709 "data_offset": 0, 00:40:11.709 "data_size": 65536 00:40:11.709 }, 00:40:11.709 { 00:40:11.709 "name": "BaseBdev3", 00:40:11.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:11.709 "is_configured": false, 00:40:11.709 "data_offset": 0, 00:40:11.709 "data_size": 0 00:40:11.709 } 00:40:11.709 ] 00:40:11.709 }' 00:40:11.709 17:37:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:11.709 17:37:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:12.274 17:37:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:40:12.274 17:37:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:12.274 17:37:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:12.274 [2024-11-26 17:37:12.737629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:40:12.274 [2024-11-26 17:37:12.737696] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:40:12.274 [2024-11-26 17:37:12.737712] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:40:12.274 [2024-11-26 17:37:12.737990] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:40:12.274 [2024-11-26 17:37:12.743410] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:40:12.274 [2024-11-26 17:37:12.743433] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:40:12.274 [2024-11-26 17:37:12.743736] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:12.274 BaseBdev3 00:40:12.274 17:37:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:12.274 17:37:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:40:12.274 17:37:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:40:12.274 17:37:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:12.274 17:37:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:40:12.274 17:37:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:12.274 17:37:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:12.274 17:37:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:40:12.274 17:37:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:12.274 17:37:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:12.274 17:37:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:12.274 17:37:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:40:12.274 17:37:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:12.274 17:37:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:12.274 [ 00:40:12.274 { 00:40:12.274 "name": "BaseBdev3", 00:40:12.274 "aliases": [ 00:40:12.274 "331e77c4-3c30-4af1-adca-cbb2f2f89927" 00:40:12.274 ], 00:40:12.274 "product_name": "Malloc disk", 00:40:12.274 "block_size": 512, 00:40:12.274 "num_blocks": 65536, 00:40:12.274 "uuid": "331e77c4-3c30-4af1-adca-cbb2f2f89927", 00:40:12.274 "assigned_rate_limits": { 00:40:12.274 "rw_ios_per_sec": 0, 00:40:12.274 "rw_mbytes_per_sec": 0, 00:40:12.274 "r_mbytes_per_sec": 0, 00:40:12.274 "w_mbytes_per_sec": 0 00:40:12.274 }, 00:40:12.274 "claimed": true, 00:40:12.274 "claim_type": "exclusive_write", 00:40:12.274 "zoned": false, 00:40:12.274 "supported_io_types": { 00:40:12.274 "read": true, 00:40:12.274 "write": true, 00:40:12.274 "unmap": true, 00:40:12.274 "flush": true, 00:40:12.274 "reset": true, 00:40:12.274 "nvme_admin": false, 00:40:12.274 "nvme_io": false, 00:40:12.274 "nvme_io_md": false, 00:40:12.274 "write_zeroes": true, 00:40:12.274 "zcopy": true, 00:40:12.274 "get_zone_info": false, 00:40:12.274 "zone_management": false, 00:40:12.274 "zone_append": false, 00:40:12.274 "compare": false, 00:40:12.274 "compare_and_write": false, 00:40:12.274 "abort": true, 00:40:12.274 "seek_hole": false, 00:40:12.274 "seek_data": false, 00:40:12.274 "copy": true, 00:40:12.274 "nvme_iov_md": false 00:40:12.274 }, 00:40:12.275 "memory_domains": [ 00:40:12.275 { 00:40:12.275 "dma_device_id": "system", 00:40:12.275 "dma_device_type": 1 00:40:12.275 }, 00:40:12.275 { 00:40:12.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:12.275 "dma_device_type": 2 00:40:12.275 } 00:40:12.275 ], 00:40:12.275 "driver_specific": {} 00:40:12.275 } 00:40:12.275 ] 00:40:12.275 17:37:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:40:12.275 17:37:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:40:12.275 17:37:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:40:12.275 17:37:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:40:12.275 17:37:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:40:12.275 17:37:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:12.275 17:37:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:12.275 17:37:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:12.275 17:37:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:12.275 17:37:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:40:12.275 17:37:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:12.275 17:37:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:12.275 17:37:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:12.275 17:37:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:12.275 17:37:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:12.275 17:37:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:12.275 17:37:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:12.275 17:37:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:12.275 17:37:12 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:12.275 17:37:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:12.275 "name": "Existed_Raid", 00:40:12.275 "uuid": "f6ccd391-dedd-41fa-9a02-8a40b4ef5c89", 00:40:12.275 "strip_size_kb": 64, 00:40:12.275 "state": "online", 00:40:12.275 "raid_level": "raid5f", 00:40:12.275 "superblock": false, 00:40:12.275 "num_base_bdevs": 3, 00:40:12.275 "num_base_bdevs_discovered": 3, 00:40:12.275 "num_base_bdevs_operational": 3, 00:40:12.275 "base_bdevs_list": [ 00:40:12.275 { 00:40:12.275 "name": "BaseBdev1", 00:40:12.275 "uuid": "95f32bde-cb8c-405b-aa37-a064f1e0bccb", 00:40:12.275 "is_configured": true, 00:40:12.275 "data_offset": 0, 00:40:12.275 "data_size": 65536 00:40:12.275 }, 00:40:12.275 { 00:40:12.275 "name": "BaseBdev2", 00:40:12.275 "uuid": "4f8782be-e168-4308-883c-652b43998772", 00:40:12.275 "is_configured": true, 00:40:12.275 "data_offset": 0, 00:40:12.275 "data_size": 65536 00:40:12.275 }, 00:40:12.275 { 00:40:12.275 "name": "BaseBdev3", 00:40:12.275 "uuid": "331e77c4-3c30-4af1-adca-cbb2f2f89927", 00:40:12.275 "is_configured": true, 00:40:12.275 "data_offset": 0, 00:40:12.275 "data_size": 65536 00:40:12.275 } 00:40:12.275 ] 00:40:12.275 }' 00:40:12.275 17:37:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:12.275 17:37:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:12.533 17:37:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:40:12.533 17:37:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:40:12.533 17:37:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:40:12.533 17:37:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:40:12.533 17:37:13 
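After BaseBdev3 is claimed, `verify_raid_bdev_state Existed_Raid online raid5f 64 3` passes against the dump above. A sketch of the kind of checks that helper performs, reconstructed from the field names visible in the JSON (the real helper lives in bdev/bdev_raid.sh and may differ in detail):

```python
def verify_raid_bdev_state(info, expected_state, raid_level, strip_size_kb,
                           num_operational):
    # Compare state, level, strip size, and operational count, and check
    # that the discovered count matches the configured base bdev entries.
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size_kb
    assert info["num_base_bdevs_operational"] == num_operational
    configured = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert configured == info["num_base_bdevs_discovered"]

# Values from the "online" dump above; base bdev entries reduced to the
# one field this check reads.
existed_raid = {
    "state": "online",
    "raid_level": "raid5f",
    "strip_size_kb": 64,
    "num_base_bdevs_discovered": 3,
    "num_base_bdevs_operational": 3,
    "base_bdevs_list": [{"is_configured": True}] * 3,
}
verify_raid_bdev_state(existed_raid, "online", "raid5f", 64, 3)
```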
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:40:12.533 17:37:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:40:12.533 17:37:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:40:12.533 17:37:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:12.533 17:37:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:12.533 17:37:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:40:12.533 [2024-11-26 17:37:13.185970] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:40:12.533 17:37:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:12.533 17:37:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:40:12.533 "name": "Existed_Raid", 00:40:12.533 "aliases": [ 00:40:12.533 "f6ccd391-dedd-41fa-9a02-8a40b4ef5c89" 00:40:12.533 ], 00:40:12.533 "product_name": "Raid Volume", 00:40:12.533 "block_size": 512, 00:40:12.533 "num_blocks": 131072, 00:40:12.533 "uuid": "f6ccd391-dedd-41fa-9a02-8a40b4ef5c89", 00:40:12.533 "assigned_rate_limits": { 00:40:12.533 "rw_ios_per_sec": 0, 00:40:12.533 "rw_mbytes_per_sec": 0, 00:40:12.533 "r_mbytes_per_sec": 0, 00:40:12.533 "w_mbytes_per_sec": 0 00:40:12.533 }, 00:40:12.533 "claimed": false, 00:40:12.533 "zoned": false, 00:40:12.533 "supported_io_types": { 00:40:12.533 "read": true, 00:40:12.533 "write": true, 00:40:12.533 "unmap": false, 00:40:12.533 "flush": false, 00:40:12.533 "reset": true, 00:40:12.533 "nvme_admin": false, 00:40:12.533 "nvme_io": false, 00:40:12.533 "nvme_io_md": false, 00:40:12.533 "write_zeroes": true, 00:40:12.533 "zcopy": false, 00:40:12.533 "get_zone_info": false, 00:40:12.533 "zone_management": false, 00:40:12.533 "zone_append": false, 
00:40:12.533 "compare": false, 00:40:12.533 "compare_and_write": false, 00:40:12.533 "abort": false, 00:40:12.533 "seek_hole": false, 00:40:12.533 "seek_data": false, 00:40:12.533 "copy": false, 00:40:12.533 "nvme_iov_md": false 00:40:12.533 }, 00:40:12.533 "driver_specific": { 00:40:12.533 "raid": { 00:40:12.533 "uuid": "f6ccd391-dedd-41fa-9a02-8a40b4ef5c89", 00:40:12.533 "strip_size_kb": 64, 00:40:12.533 "state": "online", 00:40:12.533 "raid_level": "raid5f", 00:40:12.533 "superblock": false, 00:40:12.533 "num_base_bdevs": 3, 00:40:12.533 "num_base_bdevs_discovered": 3, 00:40:12.533 "num_base_bdevs_operational": 3, 00:40:12.533 "base_bdevs_list": [ 00:40:12.533 { 00:40:12.533 "name": "BaseBdev1", 00:40:12.533 "uuid": "95f32bde-cb8c-405b-aa37-a064f1e0bccb", 00:40:12.533 "is_configured": true, 00:40:12.533 "data_offset": 0, 00:40:12.533 "data_size": 65536 00:40:12.533 }, 00:40:12.534 { 00:40:12.534 "name": "BaseBdev2", 00:40:12.534 "uuid": "4f8782be-e168-4308-883c-652b43998772", 00:40:12.534 "is_configured": true, 00:40:12.534 "data_offset": 0, 00:40:12.534 "data_size": 65536 00:40:12.534 }, 00:40:12.534 { 00:40:12.534 "name": "BaseBdev3", 00:40:12.534 "uuid": "331e77c4-3c30-4af1-adca-cbb2f2f89927", 00:40:12.534 "is_configured": true, 00:40:12.534 "data_offset": 0, 00:40:12.534 "data_size": 65536 00:40:12.534 } 00:40:12.534 ] 00:40:12.534 } 00:40:12.534 } 00:40:12.534 }' 00:40:12.534 17:37:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:40:12.793 17:37:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:40:12.793 BaseBdev2 00:40:12.793 BaseBdev3' 00:40:12.793 17:37:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:40:12.793 17:37:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:40:12.793 17:37:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:40:12.793 17:37:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:40:12.793 17:37:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:40:12.793 17:37:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:12.793 17:37:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:12.793 17:37:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:12.793 17:37:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:40:12.793 17:37:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:40:12.793 17:37:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:40:12.793 17:37:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:40:12.793 17:37:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:40:12.793 17:37:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:12.793 17:37:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:12.793 17:37:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:12.793 17:37:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:40:12.793 17:37:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:40:12.793 17:37:13 bdev_raid.raid5f_state_function_test -- 
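The `verify_raid_bdev_properties` loop above joins `[.block_size, .md_size, .md_interleave, .dif_type]` with jq for the raid volume and each base bdev, then compares the strings (`512 ` followed by blanks, since the metadata fields are unset on these malloc bdevs). A Python sketch of that comparison; rendering missing fields as empty strings is a simplification of jq's null handling:

```python
def format_props(bdev):
    # Joins the same fields as the log's jq filter:
    #   [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")
    # Missing fields become "", approximating jq's rendering of null.
    keys = ("block_size", "md_size", "md_interleave", "dif_type")
    return " ".join(str(bdev[k]) if k in bdev else "" for k in keys)

raid_vol = {"block_size": 512}   # Existed_Raid, per the dump above
base_bdev = {"block_size": 512}  # BaseBdev1/2/3, per the dumps above
assert format_props(raid_vol) == format_props(base_bdev)
print(repr(format_props(raid_vol)))  # '512   '
```

Comparing the joined string in one shot catches any mismatch in block size or metadata layout between the raid volume and its members.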
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:40:12.793 17:37:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:40:12.793 17:37:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:40:12.793 17:37:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:12.793 17:37:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:12.793 17:37:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:12.793 17:37:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:40:12.793 17:37:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:40:12.793 17:37:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:40:12.793 17:37:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:12.793 17:37:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:12.793 [2024-11-26 17:37:13.429365] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:40:13.052 17:37:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:13.052 17:37:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:40:13.052 17:37:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:40:13.052 17:37:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:40:13.052 17:37:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:40:13.052 17:37:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:40:13.052 
17:37:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:40:13.052 17:37:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:13.052 17:37:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:13.052 17:37:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:13.052 17:37:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:13.052 17:37:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:40:13.052 17:37:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:13.052 17:37:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:13.052 17:37:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:13.052 17:37:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:13.052 17:37:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:13.052 17:37:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:13.052 17:37:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:13.052 17:37:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:13.052 17:37:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:13.052 17:37:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:13.052 "name": "Existed_Raid", 00:40:13.052 "uuid": "f6ccd391-dedd-41fa-9a02-8a40b4ef5c89", 00:40:13.052 "strip_size_kb": 64, 00:40:13.052 "state": 
"online", 00:40:13.052 "raid_level": "raid5f", 00:40:13.052 "superblock": false, 00:40:13.052 "num_base_bdevs": 3, 00:40:13.052 "num_base_bdevs_discovered": 2, 00:40:13.052 "num_base_bdevs_operational": 2, 00:40:13.052 "base_bdevs_list": [ 00:40:13.052 { 00:40:13.052 "name": null, 00:40:13.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:13.052 "is_configured": false, 00:40:13.052 "data_offset": 0, 00:40:13.052 "data_size": 65536 00:40:13.052 }, 00:40:13.052 { 00:40:13.052 "name": "BaseBdev2", 00:40:13.052 "uuid": "4f8782be-e168-4308-883c-652b43998772", 00:40:13.052 "is_configured": true, 00:40:13.052 "data_offset": 0, 00:40:13.052 "data_size": 65536 00:40:13.052 }, 00:40:13.052 { 00:40:13.052 "name": "BaseBdev3", 00:40:13.052 "uuid": "331e77c4-3c30-4af1-adca-cbb2f2f89927", 00:40:13.052 "is_configured": true, 00:40:13.052 "data_offset": 0, 00:40:13.052 "data_size": 65536 00:40:13.052 } 00:40:13.052 ] 00:40:13.052 }' 00:40:13.052 17:37:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:13.052 17:37:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:13.311 17:37:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:40:13.311 17:37:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:40:13.311 17:37:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:40:13.311 17:37:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:13.311 17:37:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:13.311 17:37:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:13.311 17:37:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:13.570 17:37:14 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:40:13.570 17:37:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:40:13.570 17:37:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:40:13.570 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:13.570 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:13.570 [2024-11-26 17:37:14.010696] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:40:13.570 [2024-11-26 17:37:14.010797] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:40:13.570 [2024-11-26 17:37:14.114386] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:40:13.570 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:13.570 17:37:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:40:13.570 17:37:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:40:13.570 17:37:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:13.570 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:13.570 17:37:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:40:13.570 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:13.570 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:13.570 17:37:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:40:13.570 17:37:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:40:13.570 17:37:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:40:13.570 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:13.570 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:13.570 [2024-11-26 17:37:14.170339] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:40:13.570 [2024-11-26 17:37:14.170465] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:13.830 BaseBdev2 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:40:13.830 [ 00:40:13.830 { 00:40:13.830 "name": "BaseBdev2", 00:40:13.830 "aliases": [ 00:40:13.830 "ba4d3015-d122-4811-a821-6285077e5730" 00:40:13.830 ], 00:40:13.830 "product_name": "Malloc disk", 00:40:13.830 "block_size": 512, 00:40:13.830 "num_blocks": 65536, 00:40:13.830 "uuid": "ba4d3015-d122-4811-a821-6285077e5730", 00:40:13.830 "assigned_rate_limits": { 00:40:13.830 "rw_ios_per_sec": 0, 00:40:13.830 "rw_mbytes_per_sec": 0, 00:40:13.830 "r_mbytes_per_sec": 0, 00:40:13.830 "w_mbytes_per_sec": 0 00:40:13.830 }, 00:40:13.830 "claimed": false, 00:40:13.830 "zoned": false, 00:40:13.830 "supported_io_types": { 00:40:13.830 "read": true, 00:40:13.830 "write": true, 00:40:13.830 "unmap": true, 00:40:13.830 "flush": true, 00:40:13.830 "reset": true, 00:40:13.830 "nvme_admin": false, 00:40:13.830 "nvme_io": false, 00:40:13.830 "nvme_io_md": false, 00:40:13.830 "write_zeroes": true, 00:40:13.830 "zcopy": true, 00:40:13.830 "get_zone_info": false, 00:40:13.830 "zone_management": false, 00:40:13.830 "zone_append": false, 00:40:13.830 "compare": false, 00:40:13.830 "compare_and_write": false, 00:40:13.830 "abort": true, 00:40:13.830 "seek_hole": false, 00:40:13.830 "seek_data": false, 00:40:13.830 "copy": true, 00:40:13.830 "nvme_iov_md": false 00:40:13.830 }, 00:40:13.830 "memory_domains": [ 00:40:13.830 { 00:40:13.830 "dma_device_id": "system", 00:40:13.830 "dma_device_type": 1 00:40:13.830 }, 00:40:13.830 { 00:40:13.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:13.830 "dma_device_type": 2 00:40:13.830 } 00:40:13.830 ], 00:40:13.830 "driver_specific": {} 00:40:13.830 } 00:40:13.830 ] 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:13.830 BaseBdev3 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:13.830 17:37:14 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:40:13.830 [ 00:40:13.830 { 00:40:13.830 "name": "BaseBdev3", 00:40:13.830 "aliases": [ 00:40:13.830 "ca5d9314-c1b7-40e3-94a2-29a67be65f62" 00:40:13.830 ], 00:40:13.830 "product_name": "Malloc disk", 00:40:13.830 "block_size": 512, 00:40:13.830 "num_blocks": 65536, 00:40:13.830 "uuid": "ca5d9314-c1b7-40e3-94a2-29a67be65f62", 00:40:13.830 "assigned_rate_limits": { 00:40:13.830 "rw_ios_per_sec": 0, 00:40:13.830 "rw_mbytes_per_sec": 0, 00:40:13.830 "r_mbytes_per_sec": 0, 00:40:13.830 "w_mbytes_per_sec": 0 00:40:13.830 }, 00:40:13.830 "claimed": false, 00:40:13.830 "zoned": false, 00:40:13.830 "supported_io_types": { 00:40:13.830 "read": true, 00:40:13.830 "write": true, 00:40:13.830 "unmap": true, 00:40:13.830 "flush": true, 00:40:13.830 "reset": true, 00:40:13.830 "nvme_admin": false, 00:40:13.831 "nvme_io": false, 00:40:13.831 "nvme_io_md": false, 00:40:13.831 "write_zeroes": true, 00:40:13.831 "zcopy": true, 00:40:13.831 "get_zone_info": false, 00:40:13.831 "zone_management": false, 00:40:13.831 "zone_append": false, 00:40:13.831 "compare": false, 00:40:13.831 "compare_and_write": false, 00:40:13.831 "abort": true, 00:40:13.831 "seek_hole": false, 00:40:13.831 "seek_data": false, 00:40:13.831 "copy": true, 00:40:13.831 "nvme_iov_md": false 00:40:13.831 }, 00:40:13.831 "memory_domains": [ 00:40:13.831 { 00:40:13.831 "dma_device_id": "system", 00:40:13.831 "dma_device_type": 1 00:40:13.831 }, 00:40:13.831 { 00:40:13.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:13.831 "dma_device_type": 2 00:40:13.831 } 00:40:13.831 ], 00:40:13.831 "driver_specific": {} 00:40:13.831 } 00:40:13.831 ] 00:40:13.831 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:13.831 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:40:13.831 17:37:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:40:13.831 17:37:14 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:40:13.831 17:37:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:40:13.831 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:13.831 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:13.831 [2024-11-26 17:37:14.502358] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:40:13.831 [2024-11-26 17:37:14.502530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:40:13.831 [2024-11-26 17:37:14.502590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:40:13.831 [2024-11-26 17:37:14.504506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:40:13.831 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:13.831 17:37:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:40:13.831 17:37:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:13.831 17:37:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:40:13.831 17:37:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:13.831 17:37:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:13.831 17:37:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:40:13.831 17:37:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:13.831 17:37:14 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:13.831 17:37:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:13.831 17:37:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:13.831 17:37:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:13.831 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:13.831 17:37:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:13.831 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:14.090 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:14.090 17:37:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:14.090 "name": "Existed_Raid", 00:40:14.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:14.090 "strip_size_kb": 64, 00:40:14.090 "state": "configuring", 00:40:14.090 "raid_level": "raid5f", 00:40:14.090 "superblock": false, 00:40:14.090 "num_base_bdevs": 3, 00:40:14.090 "num_base_bdevs_discovered": 2, 00:40:14.090 "num_base_bdevs_operational": 3, 00:40:14.090 "base_bdevs_list": [ 00:40:14.090 { 00:40:14.090 "name": "BaseBdev1", 00:40:14.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:14.090 "is_configured": false, 00:40:14.090 "data_offset": 0, 00:40:14.090 "data_size": 0 00:40:14.090 }, 00:40:14.090 { 00:40:14.090 "name": "BaseBdev2", 00:40:14.090 "uuid": "ba4d3015-d122-4811-a821-6285077e5730", 00:40:14.090 "is_configured": true, 00:40:14.090 "data_offset": 0, 00:40:14.090 "data_size": 65536 00:40:14.090 }, 00:40:14.090 { 00:40:14.090 "name": "BaseBdev3", 00:40:14.090 "uuid": "ca5d9314-c1b7-40e3-94a2-29a67be65f62", 00:40:14.090 "is_configured": true, 
00:40:14.090 "data_offset": 0, 00:40:14.090 "data_size": 65536 00:40:14.090 } 00:40:14.090 ] 00:40:14.090 }' 00:40:14.090 17:37:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:14.090 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:14.347 17:37:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:40:14.347 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:14.347 17:37:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:14.347 [2024-11-26 17:37:14.997575] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:40:14.347 17:37:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:14.347 17:37:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:40:14.347 17:37:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:14.347 17:37:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:40:14.347 17:37:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:14.347 17:37:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:14.347 17:37:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:40:14.347 17:37:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:14.347 17:37:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:14.347 17:37:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:14.347 17:37:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:14.347 17:37:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:14.347 17:37:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:14.347 17:37:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:14.347 17:37:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:14.347 17:37:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:14.604 17:37:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:14.604 "name": "Existed_Raid", 00:40:14.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:14.604 "strip_size_kb": 64, 00:40:14.604 "state": "configuring", 00:40:14.604 "raid_level": "raid5f", 00:40:14.604 "superblock": false, 00:40:14.604 "num_base_bdevs": 3, 00:40:14.604 "num_base_bdevs_discovered": 1, 00:40:14.604 "num_base_bdevs_operational": 3, 00:40:14.604 "base_bdevs_list": [ 00:40:14.604 { 00:40:14.604 "name": "BaseBdev1", 00:40:14.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:14.604 "is_configured": false, 00:40:14.604 "data_offset": 0, 00:40:14.604 "data_size": 0 00:40:14.604 }, 00:40:14.604 { 00:40:14.604 "name": null, 00:40:14.604 "uuid": "ba4d3015-d122-4811-a821-6285077e5730", 00:40:14.604 "is_configured": false, 00:40:14.604 "data_offset": 0, 00:40:14.604 "data_size": 65536 00:40:14.604 }, 00:40:14.604 { 00:40:14.604 "name": "BaseBdev3", 00:40:14.604 "uuid": "ca5d9314-c1b7-40e3-94a2-29a67be65f62", 00:40:14.604 "is_configured": true, 00:40:14.604 "data_offset": 0, 00:40:14.604 "data_size": 65536 00:40:14.604 } 00:40:14.604 ] 00:40:14.604 }' 00:40:14.604 17:37:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:14.604 17:37:15 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:14.864 17:37:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:40:14.864 17:37:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:14.864 17:37:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:14.864 17:37:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:14.864 17:37:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:14.864 17:37:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:40:14.864 17:37:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:40:14.864 17:37:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:14.864 17:37:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:14.864 [2024-11-26 17:37:15.499546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:40:14.864 BaseBdev1 00:40:14.864 17:37:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:14.864 17:37:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:40:14.864 17:37:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:40:14.864 17:37:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:14.864 17:37:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:40:14.864 17:37:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:14.864 17:37:15 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:14.864 17:37:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:40:14.864 17:37:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:14.864 17:37:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:14.864 17:37:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:14.864 17:37:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:40:14.864 17:37:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:14.864 17:37:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:14.864 [ 00:40:14.864 { 00:40:14.864 "name": "BaseBdev1", 00:40:14.864 "aliases": [ 00:40:14.864 "9b1db303-f25e-4dd7-9a44-f3a89351f992" 00:40:14.864 ], 00:40:14.864 "product_name": "Malloc disk", 00:40:14.864 "block_size": 512, 00:40:14.864 "num_blocks": 65536, 00:40:14.864 "uuid": "9b1db303-f25e-4dd7-9a44-f3a89351f992", 00:40:14.864 "assigned_rate_limits": { 00:40:14.864 "rw_ios_per_sec": 0, 00:40:14.864 "rw_mbytes_per_sec": 0, 00:40:14.864 "r_mbytes_per_sec": 0, 00:40:14.864 "w_mbytes_per_sec": 0 00:40:14.864 }, 00:40:14.864 "claimed": true, 00:40:14.864 "claim_type": "exclusive_write", 00:40:14.864 "zoned": false, 00:40:14.864 "supported_io_types": { 00:40:14.864 "read": true, 00:40:14.864 "write": true, 00:40:14.864 "unmap": true, 00:40:14.864 "flush": true, 00:40:14.864 "reset": true, 00:40:14.864 "nvme_admin": false, 00:40:14.864 "nvme_io": false, 00:40:14.864 "nvme_io_md": false, 00:40:14.864 "write_zeroes": true, 00:40:14.864 "zcopy": true, 00:40:14.864 "get_zone_info": false, 00:40:14.864 "zone_management": false, 00:40:14.864 "zone_append": false, 00:40:14.864 
"compare": false, 00:40:14.864 "compare_and_write": false, 00:40:14.864 "abort": true, 00:40:14.864 "seek_hole": false, 00:40:14.864 "seek_data": false, 00:40:14.864 "copy": true, 00:40:14.864 "nvme_iov_md": false 00:40:14.864 }, 00:40:14.864 "memory_domains": [ 00:40:14.864 { 00:40:14.864 "dma_device_id": "system", 00:40:14.864 "dma_device_type": 1 00:40:14.864 }, 00:40:14.864 { 00:40:14.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:14.864 "dma_device_type": 2 00:40:14.864 } 00:40:14.864 ], 00:40:14.864 "driver_specific": {} 00:40:14.864 } 00:40:14.864 ] 00:40:14.864 17:37:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:14.864 17:37:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:40:14.864 17:37:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:40:14.864 17:37:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:14.864 17:37:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:40:14.864 17:37:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:14.864 17:37:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:14.864 17:37:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:40:14.864 17:37:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:14.864 17:37:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:14.864 17:37:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:14.864 17:37:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:14.864 17:37:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:14.864 17:37:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:14.864 17:37:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:14.865 17:37:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:15.123 17:37:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:15.123 17:37:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:15.123 "name": "Existed_Raid", 00:40:15.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:15.123 "strip_size_kb": 64, 00:40:15.123 "state": "configuring", 00:40:15.123 "raid_level": "raid5f", 00:40:15.123 "superblock": false, 00:40:15.123 "num_base_bdevs": 3, 00:40:15.123 "num_base_bdevs_discovered": 2, 00:40:15.123 "num_base_bdevs_operational": 3, 00:40:15.123 "base_bdevs_list": [ 00:40:15.123 { 00:40:15.123 "name": "BaseBdev1", 00:40:15.123 "uuid": "9b1db303-f25e-4dd7-9a44-f3a89351f992", 00:40:15.123 "is_configured": true, 00:40:15.123 "data_offset": 0, 00:40:15.123 "data_size": 65536 00:40:15.123 }, 00:40:15.123 { 00:40:15.123 "name": null, 00:40:15.123 "uuid": "ba4d3015-d122-4811-a821-6285077e5730", 00:40:15.123 "is_configured": false, 00:40:15.123 "data_offset": 0, 00:40:15.123 "data_size": 65536 00:40:15.123 }, 00:40:15.123 { 00:40:15.123 "name": "BaseBdev3", 00:40:15.123 "uuid": "ca5d9314-c1b7-40e3-94a2-29a67be65f62", 00:40:15.123 "is_configured": true, 00:40:15.123 "data_offset": 0, 00:40:15.123 "data_size": 65536 00:40:15.123 } 00:40:15.123 ] 00:40:15.123 }' 00:40:15.123 17:37:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:15.123 17:37:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:15.381 17:37:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:15.381 17:37:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:15.381 17:37:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:15.381 17:37:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:40:15.381 17:37:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:15.381 17:37:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:40:15.381 17:37:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:40:15.381 17:37:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:15.381 17:37:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:15.381 [2024-11-26 17:37:16.026709] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:40:15.381 17:37:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:15.381 17:37:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:40:15.381 17:37:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:15.381 17:37:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:40:15.381 17:37:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:15.381 17:37:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:15.381 17:37:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:40:15.381 17:37:16 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:15.381 17:37:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:15.381 17:37:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:15.381 17:37:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:15.381 17:37:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:15.381 17:37:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:15.381 17:37:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:15.381 17:37:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:15.381 17:37:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:15.638 17:37:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:15.638 "name": "Existed_Raid", 00:40:15.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:15.638 "strip_size_kb": 64, 00:40:15.638 "state": "configuring", 00:40:15.638 "raid_level": "raid5f", 00:40:15.638 "superblock": false, 00:40:15.638 "num_base_bdevs": 3, 00:40:15.638 "num_base_bdevs_discovered": 1, 00:40:15.638 "num_base_bdevs_operational": 3, 00:40:15.638 "base_bdevs_list": [ 00:40:15.638 { 00:40:15.638 "name": "BaseBdev1", 00:40:15.638 "uuid": "9b1db303-f25e-4dd7-9a44-f3a89351f992", 00:40:15.638 "is_configured": true, 00:40:15.638 "data_offset": 0, 00:40:15.638 "data_size": 65536 00:40:15.638 }, 00:40:15.638 { 00:40:15.638 "name": null, 00:40:15.638 "uuid": "ba4d3015-d122-4811-a821-6285077e5730", 00:40:15.638 "is_configured": false, 00:40:15.638 "data_offset": 0, 00:40:15.638 "data_size": 65536 00:40:15.638 }, 00:40:15.638 { 00:40:15.638 "name": null, 
00:40:15.638 "uuid": "ca5d9314-c1b7-40e3-94a2-29a67be65f62", 00:40:15.638 "is_configured": false, 00:40:15.638 "data_offset": 0, 00:40:15.638 "data_size": 65536 00:40:15.638 } 00:40:15.638 ] 00:40:15.638 }' 00:40:15.638 17:37:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:15.638 17:37:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:15.901 17:37:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:15.901 17:37:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:15.901 17:37:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:15.901 17:37:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:40:15.901 17:37:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:15.901 17:37:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:40:15.901 17:37:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:40:15.901 17:37:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:15.901 17:37:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:15.901 [2024-11-26 17:37:16.529992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:40:15.901 17:37:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:15.902 17:37:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:40:15.902 17:37:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:15.902 17:37:16 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:40:15.902 17:37:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:15.902 17:37:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:15.902 17:37:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:40:15.902 17:37:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:15.902 17:37:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:15.902 17:37:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:15.902 17:37:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:15.902 17:37:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:15.902 17:37:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:15.902 17:37:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:15.902 17:37:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:15.902 17:37:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:15.902 17:37:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:15.902 "name": "Existed_Raid", 00:40:15.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:15.902 "strip_size_kb": 64, 00:40:15.902 "state": "configuring", 00:40:15.902 "raid_level": "raid5f", 00:40:15.902 "superblock": false, 00:40:15.902 "num_base_bdevs": 3, 00:40:15.902 "num_base_bdevs_discovered": 2, 00:40:15.902 "num_base_bdevs_operational": 3, 00:40:15.902 "base_bdevs_list": [ 00:40:15.902 { 
00:40:15.902 "name": "BaseBdev1", 00:40:15.902 "uuid": "9b1db303-f25e-4dd7-9a44-f3a89351f992", 00:40:15.902 "is_configured": true, 00:40:15.902 "data_offset": 0, 00:40:15.902 "data_size": 65536 00:40:15.902 }, 00:40:15.902 { 00:40:15.902 "name": null, 00:40:15.902 "uuid": "ba4d3015-d122-4811-a821-6285077e5730", 00:40:15.902 "is_configured": false, 00:40:15.902 "data_offset": 0, 00:40:15.902 "data_size": 65536 00:40:15.902 }, 00:40:15.902 { 00:40:15.902 "name": "BaseBdev3", 00:40:15.902 "uuid": "ca5d9314-c1b7-40e3-94a2-29a67be65f62", 00:40:15.902 "is_configured": true, 00:40:15.902 "data_offset": 0, 00:40:15.902 "data_size": 65536 00:40:15.902 } 00:40:15.902 ] 00:40:15.902 }' 00:40:15.902 17:37:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:15.902 17:37:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:16.478 17:37:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:16.478 17:37:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:16.478 17:37:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:16.478 17:37:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:40:16.478 17:37:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:16.478 17:37:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:40:16.478 17:37:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:40:16.478 17:37:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:16.478 17:37:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:16.478 [2024-11-26 17:37:17.057162] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:40:16.736 17:37:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:16.736 17:37:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:40:16.736 17:37:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:16.736 17:37:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:40:16.736 17:37:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:16.736 17:37:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:16.736 17:37:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:40:16.736 17:37:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:16.736 17:37:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:16.736 17:37:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:16.736 17:37:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:16.736 17:37:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:16.736 17:37:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:16.736 17:37:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:16.736 17:37:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:16.736 17:37:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:16.736 17:37:17 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:16.736 "name": "Existed_Raid", 00:40:16.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:16.736 "strip_size_kb": 64, 00:40:16.736 "state": "configuring", 00:40:16.736 "raid_level": "raid5f", 00:40:16.736 "superblock": false, 00:40:16.736 "num_base_bdevs": 3, 00:40:16.736 "num_base_bdevs_discovered": 1, 00:40:16.736 "num_base_bdevs_operational": 3, 00:40:16.736 "base_bdevs_list": [ 00:40:16.736 { 00:40:16.736 "name": null, 00:40:16.736 "uuid": "9b1db303-f25e-4dd7-9a44-f3a89351f992", 00:40:16.736 "is_configured": false, 00:40:16.736 "data_offset": 0, 00:40:16.736 "data_size": 65536 00:40:16.736 }, 00:40:16.736 { 00:40:16.736 "name": null, 00:40:16.736 "uuid": "ba4d3015-d122-4811-a821-6285077e5730", 00:40:16.736 "is_configured": false, 00:40:16.736 "data_offset": 0, 00:40:16.736 "data_size": 65536 00:40:16.736 }, 00:40:16.736 { 00:40:16.736 "name": "BaseBdev3", 00:40:16.736 "uuid": "ca5d9314-c1b7-40e3-94a2-29a67be65f62", 00:40:16.736 "is_configured": true, 00:40:16.736 "data_offset": 0, 00:40:16.736 "data_size": 65536 00:40:16.736 } 00:40:16.736 ] 00:40:16.736 }' 00:40:16.736 17:37:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:16.736 17:37:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:16.994 17:37:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:40:16.994 17:37:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:16.994 17:37:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:16.994 17:37:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:16.994 17:37:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:16.994 17:37:17 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:40:16.994 17:37:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:40:16.994 17:37:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:16.994 17:37:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:16.994 [2024-11-26 17:37:17.658448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:40:16.994 17:37:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:16.994 17:37:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:40:16.994 17:37:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:16.994 17:37:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:40:16.994 17:37:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:16.994 17:37:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:16.994 17:37:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:40:16.995 17:37:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:16.995 17:37:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:16.995 17:37:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:16.995 17:37:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:16.995 17:37:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:16.995 17:37:17 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:16.995 17:37:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:16.995 17:37:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:16.995 17:37:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:17.252 17:37:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:17.252 "name": "Existed_Raid", 00:40:17.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:17.252 "strip_size_kb": 64, 00:40:17.252 "state": "configuring", 00:40:17.252 "raid_level": "raid5f", 00:40:17.252 "superblock": false, 00:40:17.252 "num_base_bdevs": 3, 00:40:17.252 "num_base_bdevs_discovered": 2, 00:40:17.252 "num_base_bdevs_operational": 3, 00:40:17.252 "base_bdevs_list": [ 00:40:17.252 { 00:40:17.252 "name": null, 00:40:17.252 "uuid": "9b1db303-f25e-4dd7-9a44-f3a89351f992", 00:40:17.252 "is_configured": false, 00:40:17.252 "data_offset": 0, 00:40:17.252 "data_size": 65536 00:40:17.252 }, 00:40:17.252 { 00:40:17.252 "name": "BaseBdev2", 00:40:17.252 "uuid": "ba4d3015-d122-4811-a821-6285077e5730", 00:40:17.252 "is_configured": true, 00:40:17.252 "data_offset": 0, 00:40:17.252 "data_size": 65536 00:40:17.252 }, 00:40:17.252 { 00:40:17.252 "name": "BaseBdev3", 00:40:17.252 "uuid": "ca5d9314-c1b7-40e3-94a2-29a67be65f62", 00:40:17.252 "is_configured": true, 00:40:17.252 "data_offset": 0, 00:40:17.252 "data_size": 65536 00:40:17.252 } 00:40:17.252 ] 00:40:17.252 }' 00:40:17.252 17:37:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:17.252 17:37:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:17.509 17:37:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:17.509 17:37:18 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:17.509 17:37:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:17.509 17:37:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:40:17.509 17:37:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:17.509 17:37:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:40:17.509 17:37:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:17.509 17:37:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:40:17.509 17:37:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:17.509 17:37:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:17.509 17:37:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:17.509 17:37:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9b1db303-f25e-4dd7-9a44-f3a89351f992 00:40:17.509 17:37:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:17.509 17:37:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:17.766 [2024-11-26 17:37:18.248005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:40:17.766 [2024-11-26 17:37:18.248082] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:40:17.766 [2024-11-26 17:37:18.248095] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:40:17.766 [2024-11-26 17:37:18.248435] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:40:17.766 [2024-11-26 17:37:18.254877] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:40:17.766 [2024-11-26 17:37:18.254904] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:40:17.766 [2024-11-26 17:37:18.255256] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:17.766 NewBaseBdev 00:40:17.766 17:37:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:17.766 17:37:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:40:17.766 17:37:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:40:17.766 17:37:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:17.766 17:37:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:40:17.766 17:37:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:17.766 17:37:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:17.766 17:37:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:40:17.766 17:37:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:17.766 17:37:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:17.767 17:37:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:17.767 17:37:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:40:17.767 17:37:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:17.767 17:37:18 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:17.767 [ 00:40:17.767 { 00:40:17.767 "name": "NewBaseBdev", 00:40:17.767 "aliases": [ 00:40:17.767 "9b1db303-f25e-4dd7-9a44-f3a89351f992" 00:40:17.767 ], 00:40:17.767 "product_name": "Malloc disk", 00:40:17.767 "block_size": 512, 00:40:17.767 "num_blocks": 65536, 00:40:17.767 "uuid": "9b1db303-f25e-4dd7-9a44-f3a89351f992", 00:40:17.767 "assigned_rate_limits": { 00:40:17.767 "rw_ios_per_sec": 0, 00:40:17.767 "rw_mbytes_per_sec": 0, 00:40:17.767 "r_mbytes_per_sec": 0, 00:40:17.767 "w_mbytes_per_sec": 0 00:40:17.767 }, 00:40:17.767 "claimed": true, 00:40:17.767 "claim_type": "exclusive_write", 00:40:17.767 "zoned": false, 00:40:17.767 "supported_io_types": { 00:40:17.767 "read": true, 00:40:17.767 "write": true, 00:40:17.767 "unmap": true, 00:40:17.767 "flush": true, 00:40:17.767 "reset": true, 00:40:17.767 "nvme_admin": false, 00:40:17.767 "nvme_io": false, 00:40:17.767 "nvme_io_md": false, 00:40:17.767 "write_zeroes": true, 00:40:17.767 "zcopy": true, 00:40:17.767 "get_zone_info": false, 00:40:17.767 "zone_management": false, 00:40:17.767 "zone_append": false, 00:40:17.767 "compare": false, 00:40:17.767 "compare_and_write": false, 00:40:17.767 "abort": true, 00:40:17.767 "seek_hole": false, 00:40:17.767 "seek_data": false, 00:40:17.767 "copy": true, 00:40:17.767 "nvme_iov_md": false 00:40:17.767 }, 00:40:17.767 "memory_domains": [ 00:40:17.767 { 00:40:17.767 "dma_device_id": "system", 00:40:17.767 "dma_device_type": 1 00:40:17.767 }, 00:40:17.767 { 00:40:17.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:17.767 "dma_device_type": 2 00:40:17.767 } 00:40:17.767 ], 00:40:17.767 "driver_specific": {} 00:40:17.767 } 00:40:17.767 ] 00:40:17.767 17:37:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:17.767 17:37:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:40:17.767 17:37:18 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:40:17.767 17:37:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:17.767 17:37:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:17.767 17:37:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:17.767 17:37:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:17.767 17:37:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:40:17.767 17:37:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:17.767 17:37:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:17.767 17:37:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:17.767 17:37:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:17.767 17:37:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:17.767 17:37:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:17.767 17:37:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:17.767 17:37:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:17.767 17:37:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:17.767 17:37:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:17.767 "name": "Existed_Raid", 00:40:17.767 "uuid": "4ecf1ebc-f669-485a-94dc-297e39788c69", 00:40:17.767 "strip_size_kb": 64, 00:40:17.767 "state": "online", 
00:40:17.767 "raid_level": "raid5f", 00:40:17.767 "superblock": false, 00:40:17.767 "num_base_bdevs": 3, 00:40:17.767 "num_base_bdevs_discovered": 3, 00:40:17.767 "num_base_bdevs_operational": 3, 00:40:17.767 "base_bdevs_list": [ 00:40:17.767 { 00:40:17.767 "name": "NewBaseBdev", 00:40:17.767 "uuid": "9b1db303-f25e-4dd7-9a44-f3a89351f992", 00:40:17.767 "is_configured": true, 00:40:17.767 "data_offset": 0, 00:40:17.767 "data_size": 65536 00:40:17.767 }, 00:40:17.767 { 00:40:17.767 "name": "BaseBdev2", 00:40:17.767 "uuid": "ba4d3015-d122-4811-a821-6285077e5730", 00:40:17.767 "is_configured": true, 00:40:17.767 "data_offset": 0, 00:40:17.767 "data_size": 65536 00:40:17.767 }, 00:40:17.767 { 00:40:17.767 "name": "BaseBdev3", 00:40:17.767 "uuid": "ca5d9314-c1b7-40e3-94a2-29a67be65f62", 00:40:17.767 "is_configured": true, 00:40:17.767 "data_offset": 0, 00:40:17.767 "data_size": 65536 00:40:17.767 } 00:40:17.767 ] 00:40:17.767 }' 00:40:17.767 17:37:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:17.767 17:37:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:18.334 17:37:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:40:18.334 17:37:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:40:18.334 17:37:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:40:18.334 17:37:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:40:18.334 17:37:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:40:18.334 17:37:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:40:18.334 17:37:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:40:18.334 17:37:18 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:18.334 17:37:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:18.334 17:37:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:40:18.334 [2024-11-26 17:37:18.767275] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:40:18.334 17:37:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:18.334 17:37:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:40:18.334 "name": "Existed_Raid", 00:40:18.334 "aliases": [ 00:40:18.334 "4ecf1ebc-f669-485a-94dc-297e39788c69" 00:40:18.334 ], 00:40:18.334 "product_name": "Raid Volume", 00:40:18.334 "block_size": 512, 00:40:18.334 "num_blocks": 131072, 00:40:18.334 "uuid": "4ecf1ebc-f669-485a-94dc-297e39788c69", 00:40:18.334 "assigned_rate_limits": { 00:40:18.334 "rw_ios_per_sec": 0, 00:40:18.334 "rw_mbytes_per_sec": 0, 00:40:18.334 "r_mbytes_per_sec": 0, 00:40:18.334 "w_mbytes_per_sec": 0 00:40:18.334 }, 00:40:18.334 "claimed": false, 00:40:18.334 "zoned": false, 00:40:18.334 "supported_io_types": { 00:40:18.334 "read": true, 00:40:18.334 "write": true, 00:40:18.334 "unmap": false, 00:40:18.334 "flush": false, 00:40:18.335 "reset": true, 00:40:18.335 "nvme_admin": false, 00:40:18.335 "nvme_io": false, 00:40:18.335 "nvme_io_md": false, 00:40:18.335 "write_zeroes": true, 00:40:18.335 "zcopy": false, 00:40:18.335 "get_zone_info": false, 00:40:18.335 "zone_management": false, 00:40:18.335 "zone_append": false, 00:40:18.335 "compare": false, 00:40:18.335 "compare_and_write": false, 00:40:18.335 "abort": false, 00:40:18.335 "seek_hole": false, 00:40:18.335 "seek_data": false, 00:40:18.335 "copy": false, 00:40:18.335 "nvme_iov_md": false 00:40:18.335 }, 00:40:18.335 "driver_specific": { 00:40:18.335 "raid": { 00:40:18.335 "uuid": 
"4ecf1ebc-f669-485a-94dc-297e39788c69", 00:40:18.335 "strip_size_kb": 64, 00:40:18.335 "state": "online", 00:40:18.335 "raid_level": "raid5f", 00:40:18.335 "superblock": false, 00:40:18.335 "num_base_bdevs": 3, 00:40:18.335 "num_base_bdevs_discovered": 3, 00:40:18.335 "num_base_bdevs_operational": 3, 00:40:18.335 "base_bdevs_list": [ 00:40:18.335 { 00:40:18.335 "name": "NewBaseBdev", 00:40:18.335 "uuid": "9b1db303-f25e-4dd7-9a44-f3a89351f992", 00:40:18.335 "is_configured": true, 00:40:18.335 "data_offset": 0, 00:40:18.335 "data_size": 65536 00:40:18.335 }, 00:40:18.335 { 00:40:18.335 "name": "BaseBdev2", 00:40:18.335 "uuid": "ba4d3015-d122-4811-a821-6285077e5730", 00:40:18.335 "is_configured": true, 00:40:18.335 "data_offset": 0, 00:40:18.335 "data_size": 65536 00:40:18.335 }, 00:40:18.335 { 00:40:18.335 "name": "BaseBdev3", 00:40:18.335 "uuid": "ca5d9314-c1b7-40e3-94a2-29a67be65f62", 00:40:18.335 "is_configured": true, 00:40:18.335 "data_offset": 0, 00:40:18.335 "data_size": 65536 00:40:18.335 } 00:40:18.335 ] 00:40:18.335 } 00:40:18.335 } 00:40:18.335 }' 00:40:18.335 17:37:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:40:18.335 17:37:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:40:18.335 BaseBdev2 00:40:18.335 BaseBdev3' 00:40:18.335 17:37:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:40:18.335 17:37:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:40:18.335 17:37:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:40:18.335 17:37:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:40:18.335 17:37:18 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:40:18.335 17:37:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:18.335 17:37:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:18.335 17:37:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:18.335 17:37:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:40:18.335 17:37:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:40:18.335 17:37:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:40:18.335 17:37:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:40:18.335 17:37:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:40:18.335 17:37:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:18.335 17:37:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:18.335 17:37:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:18.335 17:37:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:40:18.335 17:37:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:40:18.335 17:37:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:40:18.335 17:37:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:40:18.335 17:37:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:40:18.335 17:37:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:18.335 17:37:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:18.335 17:37:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:18.593 17:37:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:40:18.593 17:37:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:40:18.593 17:37:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:40:18.593 17:37:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:18.593 17:37:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:18.593 [2024-11-26 17:37:19.050482] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:40:18.594 [2024-11-26 17:37:19.050652] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:40:18.594 [2024-11-26 17:37:19.050762] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:40:18.594 [2024-11-26 17:37:19.051113] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:40:18.594 [2024-11-26 17:37:19.051131] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:40:18.594 17:37:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:18.594 17:37:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80187 00:40:18.594 17:37:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 80187 ']' 00:40:18.594 17:37:19 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 80187 00:40:18.594 17:37:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:40:18.594 17:37:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:18.594 17:37:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80187 00:40:18.594 killing process with pid 80187 00:40:18.594 17:37:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:18.594 17:37:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:18.594 17:37:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80187' 00:40:18.594 17:37:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 80187 00:40:18.594 [2024-11-26 17:37:19.099161] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:40:18.594 17:37:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 80187 00:40:18.852 [2024-11-26 17:37:19.466402] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:40:20.228 ************************************ 00:40:20.228 END TEST raid5f_state_function_test 00:40:20.228 ************************************ 00:40:20.228 17:37:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:40:20.228 00:40:20.228 real 0m10.897s 00:40:20.228 user 0m17.166s 00:40:20.228 sys 0m1.947s 00:40:20.228 17:37:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:20.228 17:37:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:40:20.228 17:37:20 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:40:20.228 17:37:20 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:40:20.228 17:37:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:20.228 17:37:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:40:20.228 ************************************ 00:40:20.228 START TEST raid5f_state_function_test_sb 00:40:20.228 ************************************ 00:40:20.228 17:37:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:40:20.228 17:37:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:40:20.228 17:37:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:40:20.228 17:37:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:40:20.228 17:37:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:40:20.228 17:37:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:40:20.228 17:37:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:40:20.228 17:37:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:40:20.228 17:37:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:40:20.228 17:37:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:40:20.228 17:37:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:40:20.228 17:37:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:40:20.228 17:37:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:40:20.228 17:37:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:40:20.228 17:37:20 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:40:20.228 17:37:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:40:20.228 17:37:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:40:20.228 17:37:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:40:20.228 17:37:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:40:20.228 17:37:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:40:20.228 17:37:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:40:20.228 17:37:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:40:20.228 17:37:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:40:20.228 17:37:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:40:20.228 17:37:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:40:20.228 17:37:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:40:20.228 17:37:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:40:20.228 17:37:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80810 00:40:20.228 17:37:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:40:20.228 17:37:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80810' 00:40:20.228 Process raid pid: 80810 00:40:20.228 17:37:20 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 80810 00:40:20.228 17:37:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80810 ']' 00:40:20.228 17:37:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:20.228 17:37:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:20.228 17:37:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:20.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:20.228 17:37:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:20.228 17:37:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:20.228 [2024-11-26 17:37:20.825726] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:40:20.228 [2024-11-26 17:37:20.825932] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:20.486 [2024-11-26 17:37:20.995577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:20.486 [2024-11-26 17:37:21.113483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:20.743 [2024-11-26 17:37:21.325590] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:40:20.743 [2024-11-26 17:37:21.325643] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:40:21.000 17:37:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:21.000 17:37:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:40:21.000 17:37:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:40:21.000 17:37:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:21.000 17:37:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:21.000 [2024-11-26 17:37:21.662126] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:40:21.000 [2024-11-26 17:37:21.662210] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:40:21.000 [2024-11-26 17:37:21.662233] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:40:21.000 [2024-11-26 17:37:21.662249] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:40:21.000 [2024-11-26 17:37:21.662260] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:40:21.000 [2024-11-26 17:37:21.662273] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:40:21.000 17:37:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:21.000 17:37:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:40:21.000 17:37:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:21.000 17:37:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:40:21.000 17:37:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:21.000 17:37:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:21.000 17:37:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:40:21.000 17:37:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:21.000 17:37:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:21.000 17:37:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:21.000 17:37:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:21.000 17:37:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:21.000 17:37:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:21.000 17:37:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:21.000 17:37:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:21.000 17:37:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:21.276 17:37:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:21.276 "name": "Existed_Raid", 00:40:21.276 "uuid": "dc6ae1d4-ed00-440d-bfd3-0ada0b16c7c7", 00:40:21.276 "strip_size_kb": 64, 00:40:21.276 "state": "configuring", 00:40:21.276 "raid_level": "raid5f", 00:40:21.276 "superblock": true, 00:40:21.276 "num_base_bdevs": 3, 00:40:21.276 "num_base_bdevs_discovered": 0, 00:40:21.276 "num_base_bdevs_operational": 3, 00:40:21.276 "base_bdevs_list": [ 00:40:21.276 { 00:40:21.276 "name": "BaseBdev1", 00:40:21.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:21.276 "is_configured": false, 00:40:21.276 "data_offset": 0, 00:40:21.276 "data_size": 0 00:40:21.276 }, 00:40:21.276 { 00:40:21.276 "name": "BaseBdev2", 00:40:21.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:21.276 "is_configured": false, 00:40:21.276 "data_offset": 0, 00:40:21.276 "data_size": 0 00:40:21.276 }, 00:40:21.276 { 00:40:21.276 "name": "BaseBdev3", 00:40:21.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:21.276 "is_configured": false, 00:40:21.276 "data_offset": 0, 00:40:21.276 "data_size": 0 00:40:21.276 } 00:40:21.276 ] 00:40:21.276 }' 00:40:21.276 17:37:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:21.276 17:37:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:21.550 17:37:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:40:21.550 17:37:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:21.550 17:37:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:21.550 [2024-11-26 17:37:22.045473] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:40:21.550 
[2024-11-26 17:37:22.045659] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:40:21.550 17:37:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:21.550 17:37:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:40:21.550 17:37:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:21.550 17:37:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:21.550 [2024-11-26 17:37:22.057451] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:40:21.550 [2024-11-26 17:37:22.057584] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:40:21.550 [2024-11-26 17:37:22.057621] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:40:21.550 [2024-11-26 17:37:22.057657] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:40:21.550 [2024-11-26 17:37:22.057685] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:40:21.550 [2024-11-26 17:37:22.057712] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:40:21.550 17:37:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:21.550 17:37:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:40:21.550 17:37:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:21.550 17:37:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:21.550 [2024-11-26 17:37:22.119760] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:40:21.550 BaseBdev1 00:40:21.550 17:37:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:21.550 17:37:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:40:21.550 17:37:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:40:21.550 17:37:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:21.550 17:37:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:40:21.550 17:37:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:21.550 17:37:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:21.550 17:37:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:40:21.550 17:37:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:21.550 17:37:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:21.550 17:37:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:21.550 17:37:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:40:21.550 17:37:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:21.550 17:37:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:21.550 [ 00:40:21.550 { 00:40:21.550 "name": "BaseBdev1", 00:40:21.550 "aliases": [ 00:40:21.550 "bc6265ea-1e13-4e41-bd3d-8458eb1143b7" 00:40:21.550 ], 00:40:21.550 "product_name": "Malloc disk", 00:40:21.550 "block_size": 512, 00:40:21.550 
"num_blocks": 65536, 00:40:21.550 "uuid": "bc6265ea-1e13-4e41-bd3d-8458eb1143b7", 00:40:21.550 "assigned_rate_limits": { 00:40:21.550 "rw_ios_per_sec": 0, 00:40:21.550 "rw_mbytes_per_sec": 0, 00:40:21.550 "r_mbytes_per_sec": 0, 00:40:21.550 "w_mbytes_per_sec": 0 00:40:21.550 }, 00:40:21.550 "claimed": true, 00:40:21.550 "claim_type": "exclusive_write", 00:40:21.550 "zoned": false, 00:40:21.550 "supported_io_types": { 00:40:21.550 "read": true, 00:40:21.550 "write": true, 00:40:21.550 "unmap": true, 00:40:21.550 "flush": true, 00:40:21.550 "reset": true, 00:40:21.550 "nvme_admin": false, 00:40:21.550 "nvme_io": false, 00:40:21.550 "nvme_io_md": false, 00:40:21.550 "write_zeroes": true, 00:40:21.550 "zcopy": true, 00:40:21.550 "get_zone_info": false, 00:40:21.550 "zone_management": false, 00:40:21.550 "zone_append": false, 00:40:21.550 "compare": false, 00:40:21.550 "compare_and_write": false, 00:40:21.550 "abort": true, 00:40:21.550 "seek_hole": false, 00:40:21.550 "seek_data": false, 00:40:21.550 "copy": true, 00:40:21.550 "nvme_iov_md": false 00:40:21.550 }, 00:40:21.550 "memory_domains": [ 00:40:21.550 { 00:40:21.550 "dma_device_id": "system", 00:40:21.550 "dma_device_type": 1 00:40:21.550 }, 00:40:21.550 { 00:40:21.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:21.550 "dma_device_type": 2 00:40:21.550 } 00:40:21.550 ], 00:40:21.550 "driver_specific": {} 00:40:21.550 } 00:40:21.550 ] 00:40:21.550 17:37:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:21.550 17:37:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:40:21.550 17:37:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:40:21.550 17:37:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:21.550 17:37:22 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:40:21.550 17:37:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:21.550 17:37:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:21.550 17:37:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:40:21.550 17:37:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:21.550 17:37:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:21.550 17:37:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:21.550 17:37:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:21.550 17:37:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:21.550 17:37:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:21.550 17:37:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:21.550 17:37:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:21.550 17:37:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:21.550 17:37:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:21.550 "name": "Existed_Raid", 00:40:21.550 "uuid": "ad909107-8f41-4a8b-9edd-67c2e47a2ce3", 00:40:21.550 "strip_size_kb": 64, 00:40:21.550 "state": "configuring", 00:40:21.550 "raid_level": "raid5f", 00:40:21.550 "superblock": true, 00:40:21.550 "num_base_bdevs": 3, 00:40:21.550 "num_base_bdevs_discovered": 1, 00:40:21.550 "num_base_bdevs_operational": 3, 00:40:21.550 "base_bdevs_list": [ 00:40:21.550 { 00:40:21.550 
"name": "BaseBdev1", 00:40:21.550 "uuid": "bc6265ea-1e13-4e41-bd3d-8458eb1143b7", 00:40:21.550 "is_configured": true, 00:40:21.550 "data_offset": 2048, 00:40:21.550 "data_size": 63488 00:40:21.550 }, 00:40:21.550 { 00:40:21.550 "name": "BaseBdev2", 00:40:21.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:21.550 "is_configured": false, 00:40:21.550 "data_offset": 0, 00:40:21.550 "data_size": 0 00:40:21.550 }, 00:40:21.550 { 00:40:21.550 "name": "BaseBdev3", 00:40:21.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:21.550 "is_configured": false, 00:40:21.551 "data_offset": 0, 00:40:21.551 "data_size": 0 00:40:21.551 } 00:40:21.551 ] 00:40:21.551 }' 00:40:21.551 17:37:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:21.551 17:37:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:22.115 17:37:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:40:22.115 17:37:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:22.115 17:37:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:22.115 [2024-11-26 17:37:22.591295] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:40:22.115 [2024-11-26 17:37:22.591380] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:40:22.115 17:37:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:22.115 17:37:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:40:22.115 17:37:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:22.115 17:37:22 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:40:22.115 [2024-11-26 17:37:22.599329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:40:22.115 [2024-11-26 17:37:22.601789] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:40:22.115 [2024-11-26 17:37:22.601856] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:40:22.115 [2024-11-26 17:37:22.601869] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:40:22.115 [2024-11-26 17:37:22.601880] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:40:22.115 17:37:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:22.115 17:37:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:40:22.115 17:37:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:40:22.115 17:37:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:40:22.115 17:37:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:22.115 17:37:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:40:22.115 17:37:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:22.115 17:37:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:22.115 17:37:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:40:22.115 17:37:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:22.115 17:37:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:40:22.115 17:37:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:22.115 17:37:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:22.115 17:37:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:22.115 17:37:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:22.115 17:37:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:22.115 17:37:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:22.115 17:37:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:22.115 17:37:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:22.115 "name": "Existed_Raid", 00:40:22.115 "uuid": "7a6449a4-a84c-4c19-bdeb-003cf15131d4", 00:40:22.115 "strip_size_kb": 64, 00:40:22.115 "state": "configuring", 00:40:22.115 "raid_level": "raid5f", 00:40:22.115 "superblock": true, 00:40:22.115 "num_base_bdevs": 3, 00:40:22.115 "num_base_bdevs_discovered": 1, 00:40:22.115 "num_base_bdevs_operational": 3, 00:40:22.115 "base_bdevs_list": [ 00:40:22.115 { 00:40:22.115 "name": "BaseBdev1", 00:40:22.115 "uuid": "bc6265ea-1e13-4e41-bd3d-8458eb1143b7", 00:40:22.115 "is_configured": true, 00:40:22.115 "data_offset": 2048, 00:40:22.115 "data_size": 63488 00:40:22.115 }, 00:40:22.115 { 00:40:22.115 "name": "BaseBdev2", 00:40:22.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:22.115 "is_configured": false, 00:40:22.115 "data_offset": 0, 00:40:22.115 "data_size": 0 00:40:22.115 }, 00:40:22.115 { 00:40:22.115 "name": "BaseBdev3", 00:40:22.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:22.115 "is_configured": false, 00:40:22.115 "data_offset": 0, 00:40:22.115 "data_size": 
0 00:40:22.115 } 00:40:22.115 ] 00:40:22.115 }' 00:40:22.115 17:37:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:22.115 17:37:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:22.374 17:37:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:40:22.374 17:37:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:22.374 17:37:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:22.633 [2024-11-26 17:37:23.080499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:40:22.633 BaseBdev2 00:40:22.633 17:37:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:22.633 17:37:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:40:22.633 17:37:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:40:22.633 17:37:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:22.633 17:37:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:40:22.633 17:37:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:22.633 17:37:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:22.633 17:37:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:40:22.633 17:37:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:22.633 17:37:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:22.633 17:37:23 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:22.633 17:37:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:40:22.633 17:37:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:22.633 17:37:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:22.633 [ 00:40:22.633 { 00:40:22.633 "name": "BaseBdev2", 00:40:22.633 "aliases": [ 00:40:22.633 "978e229f-ab46-4c5f-aa8b-43daecb01596" 00:40:22.633 ], 00:40:22.633 "product_name": "Malloc disk", 00:40:22.633 "block_size": 512, 00:40:22.633 "num_blocks": 65536, 00:40:22.633 "uuid": "978e229f-ab46-4c5f-aa8b-43daecb01596", 00:40:22.633 "assigned_rate_limits": { 00:40:22.633 "rw_ios_per_sec": 0, 00:40:22.633 "rw_mbytes_per_sec": 0, 00:40:22.633 "r_mbytes_per_sec": 0, 00:40:22.633 "w_mbytes_per_sec": 0 00:40:22.633 }, 00:40:22.633 "claimed": true, 00:40:22.633 "claim_type": "exclusive_write", 00:40:22.633 "zoned": false, 00:40:22.633 "supported_io_types": { 00:40:22.633 "read": true, 00:40:22.633 "write": true, 00:40:22.633 "unmap": true, 00:40:22.633 "flush": true, 00:40:22.633 "reset": true, 00:40:22.633 "nvme_admin": false, 00:40:22.633 "nvme_io": false, 00:40:22.633 "nvme_io_md": false, 00:40:22.633 "write_zeroes": true, 00:40:22.633 "zcopy": true, 00:40:22.633 "get_zone_info": false, 00:40:22.633 "zone_management": false, 00:40:22.633 "zone_append": false, 00:40:22.633 "compare": false, 00:40:22.633 "compare_and_write": false, 00:40:22.633 "abort": true, 00:40:22.633 "seek_hole": false, 00:40:22.633 "seek_data": false, 00:40:22.633 "copy": true, 00:40:22.633 "nvme_iov_md": false 00:40:22.633 }, 00:40:22.633 "memory_domains": [ 00:40:22.633 { 00:40:22.633 "dma_device_id": "system", 00:40:22.633 "dma_device_type": 1 00:40:22.633 }, 00:40:22.633 { 00:40:22.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:22.633 "dma_device_type": 2 00:40:22.633 } 
00:40:22.633 ], 00:40:22.633 "driver_specific": {} 00:40:22.633 } 00:40:22.633 ] 00:40:22.633 17:37:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:22.633 17:37:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:40:22.633 17:37:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:40:22.633 17:37:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:40:22.633 17:37:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:40:22.633 17:37:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:22.633 17:37:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:40:22.633 17:37:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:22.633 17:37:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:22.633 17:37:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:40:22.633 17:37:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:22.633 17:37:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:22.633 17:37:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:22.633 17:37:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:22.633 17:37:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:22.633 17:37:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:40:22.633 17:37:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:22.633 17:37:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:22.633 17:37:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:22.633 17:37:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:22.633 "name": "Existed_Raid", 00:40:22.633 "uuid": "7a6449a4-a84c-4c19-bdeb-003cf15131d4", 00:40:22.633 "strip_size_kb": 64, 00:40:22.633 "state": "configuring", 00:40:22.633 "raid_level": "raid5f", 00:40:22.633 "superblock": true, 00:40:22.633 "num_base_bdevs": 3, 00:40:22.633 "num_base_bdevs_discovered": 2, 00:40:22.633 "num_base_bdevs_operational": 3, 00:40:22.633 "base_bdevs_list": [ 00:40:22.633 { 00:40:22.633 "name": "BaseBdev1", 00:40:22.633 "uuid": "bc6265ea-1e13-4e41-bd3d-8458eb1143b7", 00:40:22.633 "is_configured": true, 00:40:22.633 "data_offset": 2048, 00:40:22.633 "data_size": 63488 00:40:22.633 }, 00:40:22.633 { 00:40:22.633 "name": "BaseBdev2", 00:40:22.633 "uuid": "978e229f-ab46-4c5f-aa8b-43daecb01596", 00:40:22.633 "is_configured": true, 00:40:22.633 "data_offset": 2048, 00:40:22.633 "data_size": 63488 00:40:22.633 }, 00:40:22.633 { 00:40:22.633 "name": "BaseBdev3", 00:40:22.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:22.634 "is_configured": false, 00:40:22.634 "data_offset": 0, 00:40:22.634 "data_size": 0 00:40:22.634 } 00:40:22.634 ] 00:40:22.634 }' 00:40:22.634 17:37:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:22.634 17:37:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:22.892 17:37:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:40:22.892 17:37:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:40:22.892 17:37:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:23.151 [2024-11-26 17:37:23.611603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:40:23.151 [2024-11-26 17:37:23.611947] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:40:23.151 [2024-11-26 17:37:23.611972] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:40:23.151 BaseBdev3 00:40:23.151 [2024-11-26 17:37:23.612518] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:40:23.151 17:37:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:23.151 17:37:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:40:23.151 17:37:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:40:23.151 17:37:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:23.151 17:37:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:40:23.151 17:37:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:23.151 17:37:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:23.151 17:37:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:40:23.151 17:37:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:23.151 17:37:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:23.151 [2024-11-26 17:37:23.619327] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:40:23.151 [2024-11-26 17:37:23.619354] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:40:23.151 [2024-11-26 17:37:23.619681] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:23.151 17:37:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:23.151 17:37:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:40:23.151 17:37:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:23.151 17:37:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:23.151 [ 00:40:23.151 { 00:40:23.151 "name": "BaseBdev3", 00:40:23.151 "aliases": [ 00:40:23.151 "362694e8-bf74-4007-bd4f-5b71cae3961d" 00:40:23.151 ], 00:40:23.151 "product_name": "Malloc disk", 00:40:23.151 "block_size": 512, 00:40:23.151 "num_blocks": 65536, 00:40:23.151 "uuid": "362694e8-bf74-4007-bd4f-5b71cae3961d", 00:40:23.151 "assigned_rate_limits": { 00:40:23.151 "rw_ios_per_sec": 0, 00:40:23.151 "rw_mbytes_per_sec": 0, 00:40:23.151 "r_mbytes_per_sec": 0, 00:40:23.151 "w_mbytes_per_sec": 0 00:40:23.151 }, 00:40:23.151 "claimed": true, 00:40:23.151 "claim_type": "exclusive_write", 00:40:23.151 "zoned": false, 00:40:23.151 "supported_io_types": { 00:40:23.151 "read": true, 00:40:23.151 "write": true, 00:40:23.151 "unmap": true, 00:40:23.151 "flush": true, 00:40:23.151 "reset": true, 00:40:23.151 "nvme_admin": false, 00:40:23.151 "nvme_io": false, 00:40:23.151 "nvme_io_md": false, 00:40:23.151 "write_zeroes": true, 00:40:23.151 "zcopy": true, 00:40:23.151 "get_zone_info": false, 00:40:23.151 "zone_management": false, 00:40:23.151 "zone_append": false, 00:40:23.151 "compare": false, 00:40:23.151 "compare_and_write": false, 00:40:23.151 "abort": true, 00:40:23.151 "seek_hole": false, 00:40:23.151 "seek_data": false, 00:40:23.151 "copy": true, 00:40:23.151 
"nvme_iov_md": false 00:40:23.151 }, 00:40:23.151 "memory_domains": [ 00:40:23.151 { 00:40:23.151 "dma_device_id": "system", 00:40:23.151 "dma_device_type": 1 00:40:23.151 }, 00:40:23.152 { 00:40:23.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:23.152 "dma_device_type": 2 00:40:23.152 } 00:40:23.152 ], 00:40:23.152 "driver_specific": {} 00:40:23.152 } 00:40:23.152 ] 00:40:23.152 17:37:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:23.152 17:37:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:40:23.152 17:37:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:40:23.152 17:37:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:40:23.152 17:37:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:40:23.152 17:37:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:23.152 17:37:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:23.152 17:37:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:23.152 17:37:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:23.152 17:37:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:40:23.152 17:37:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:23.152 17:37:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:23.152 17:37:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:23.152 17:37:23 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:40:23.152 17:37:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:23.152 17:37:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:23.152 17:37:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:23.152 17:37:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:23.152 17:37:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:23.152 17:37:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:23.152 "name": "Existed_Raid", 00:40:23.152 "uuid": "7a6449a4-a84c-4c19-bdeb-003cf15131d4", 00:40:23.152 "strip_size_kb": 64, 00:40:23.152 "state": "online", 00:40:23.152 "raid_level": "raid5f", 00:40:23.152 "superblock": true, 00:40:23.152 "num_base_bdevs": 3, 00:40:23.152 "num_base_bdevs_discovered": 3, 00:40:23.152 "num_base_bdevs_operational": 3, 00:40:23.152 "base_bdevs_list": [ 00:40:23.152 { 00:40:23.152 "name": "BaseBdev1", 00:40:23.152 "uuid": "bc6265ea-1e13-4e41-bd3d-8458eb1143b7", 00:40:23.152 "is_configured": true, 00:40:23.152 "data_offset": 2048, 00:40:23.152 "data_size": 63488 00:40:23.152 }, 00:40:23.152 { 00:40:23.152 "name": "BaseBdev2", 00:40:23.152 "uuid": "978e229f-ab46-4c5f-aa8b-43daecb01596", 00:40:23.152 "is_configured": true, 00:40:23.152 "data_offset": 2048, 00:40:23.152 "data_size": 63488 00:40:23.152 }, 00:40:23.152 { 00:40:23.152 "name": "BaseBdev3", 00:40:23.152 "uuid": "362694e8-bf74-4007-bd4f-5b71cae3961d", 00:40:23.152 "is_configured": true, 00:40:23.152 "data_offset": 2048, 00:40:23.152 "data_size": 63488 00:40:23.152 } 00:40:23.152 ] 00:40:23.152 }' 00:40:23.152 17:37:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:23.152 17:37:23 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:23.718 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:40:23.718 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:40:23.718 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:40:23.718 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:40:23.718 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:40:23.718 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:40:23.718 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:40:23.718 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:40:23.718 17:37:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:23.718 17:37:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:23.718 [2024-11-26 17:37:24.115369] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:40:23.718 17:37:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:23.718 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:40:23.718 "name": "Existed_Raid", 00:40:23.718 "aliases": [ 00:40:23.718 "7a6449a4-a84c-4c19-bdeb-003cf15131d4" 00:40:23.718 ], 00:40:23.718 "product_name": "Raid Volume", 00:40:23.718 "block_size": 512, 00:40:23.718 "num_blocks": 126976, 00:40:23.718 "uuid": "7a6449a4-a84c-4c19-bdeb-003cf15131d4", 00:40:23.718 "assigned_rate_limits": { 00:40:23.718 "rw_ios_per_sec": 0, 00:40:23.718 
"rw_mbytes_per_sec": 0, 00:40:23.718 "r_mbytes_per_sec": 0, 00:40:23.718 "w_mbytes_per_sec": 0 00:40:23.718 }, 00:40:23.718 "claimed": false, 00:40:23.718 "zoned": false, 00:40:23.718 "supported_io_types": { 00:40:23.718 "read": true, 00:40:23.718 "write": true, 00:40:23.718 "unmap": false, 00:40:23.718 "flush": false, 00:40:23.718 "reset": true, 00:40:23.718 "nvme_admin": false, 00:40:23.719 "nvme_io": false, 00:40:23.719 "nvme_io_md": false, 00:40:23.719 "write_zeroes": true, 00:40:23.719 "zcopy": false, 00:40:23.719 "get_zone_info": false, 00:40:23.719 "zone_management": false, 00:40:23.719 "zone_append": false, 00:40:23.719 "compare": false, 00:40:23.719 "compare_and_write": false, 00:40:23.719 "abort": false, 00:40:23.719 "seek_hole": false, 00:40:23.719 "seek_data": false, 00:40:23.719 "copy": false, 00:40:23.719 "nvme_iov_md": false 00:40:23.719 }, 00:40:23.719 "driver_specific": { 00:40:23.719 "raid": { 00:40:23.719 "uuid": "7a6449a4-a84c-4c19-bdeb-003cf15131d4", 00:40:23.719 "strip_size_kb": 64, 00:40:23.719 "state": "online", 00:40:23.719 "raid_level": "raid5f", 00:40:23.719 "superblock": true, 00:40:23.719 "num_base_bdevs": 3, 00:40:23.719 "num_base_bdevs_discovered": 3, 00:40:23.719 "num_base_bdevs_operational": 3, 00:40:23.719 "base_bdevs_list": [ 00:40:23.719 { 00:40:23.719 "name": "BaseBdev1", 00:40:23.719 "uuid": "bc6265ea-1e13-4e41-bd3d-8458eb1143b7", 00:40:23.719 "is_configured": true, 00:40:23.719 "data_offset": 2048, 00:40:23.719 "data_size": 63488 00:40:23.719 }, 00:40:23.719 { 00:40:23.719 "name": "BaseBdev2", 00:40:23.719 "uuid": "978e229f-ab46-4c5f-aa8b-43daecb01596", 00:40:23.719 "is_configured": true, 00:40:23.719 "data_offset": 2048, 00:40:23.719 "data_size": 63488 00:40:23.719 }, 00:40:23.719 { 00:40:23.719 "name": "BaseBdev3", 00:40:23.719 "uuid": "362694e8-bf74-4007-bd4f-5b71cae3961d", 00:40:23.719 "is_configured": true, 00:40:23.719 "data_offset": 2048, 00:40:23.719 "data_size": 63488 00:40:23.719 } 00:40:23.719 ] 00:40:23.719 } 
00:40:23.719 } 00:40:23.719 }' 00:40:23.719 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:40:23.719 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:40:23.719 BaseBdev2 00:40:23.719 BaseBdev3' 00:40:23.719 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:40:23.719 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:40:23.719 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:40:23.719 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:40:23.719 17:37:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:23.719 17:37:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:23.719 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:40:23.719 17:37:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:23.719 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:40:23.719 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:40:23.719 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:40:23.719 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:40:23.719 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:40:23.719 17:37:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:23.719 17:37:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:23.719 17:37:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:23.719 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:40:23.719 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:40:23.719 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:40:23.719 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:40:23.719 17:37:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:23.719 17:37:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:23.719 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:40:23.719 17:37:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:23.719 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:40:23.719 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:40:23.719 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:40:23.719 17:37:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:23.719 17:37:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:23.719 [2024-11-26 
17:37:24.394718] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:40:23.977 17:37:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:23.977 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:40:23.977 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:40:23.977 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:40:23.977 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:40:23.977 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:40:23.977 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:40:23.977 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:23.977 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:23.977 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:23.977 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:23.977 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:40:23.977 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:23.977 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:23.977 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:23.977 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:23.977 17:37:24 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:23.977 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:23.977 17:37:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:23.977 17:37:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:23.977 17:37:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:23.977 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:23.977 "name": "Existed_Raid", 00:40:23.977 "uuid": "7a6449a4-a84c-4c19-bdeb-003cf15131d4", 00:40:23.977 "strip_size_kb": 64, 00:40:23.977 "state": "online", 00:40:23.977 "raid_level": "raid5f", 00:40:23.977 "superblock": true, 00:40:23.977 "num_base_bdevs": 3, 00:40:23.977 "num_base_bdevs_discovered": 2, 00:40:23.977 "num_base_bdevs_operational": 2, 00:40:23.977 "base_bdevs_list": [ 00:40:23.977 { 00:40:23.977 "name": null, 00:40:23.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:23.977 "is_configured": false, 00:40:23.977 "data_offset": 0, 00:40:23.977 "data_size": 63488 00:40:23.977 }, 00:40:23.977 { 00:40:23.977 "name": "BaseBdev2", 00:40:23.977 "uuid": "978e229f-ab46-4c5f-aa8b-43daecb01596", 00:40:23.977 "is_configured": true, 00:40:23.977 "data_offset": 2048, 00:40:23.977 "data_size": 63488 00:40:23.977 }, 00:40:23.977 { 00:40:23.977 "name": "BaseBdev3", 00:40:23.977 "uuid": "362694e8-bf74-4007-bd4f-5b71cae3961d", 00:40:23.977 "is_configured": true, 00:40:23.977 "data_offset": 2048, 00:40:23.977 "data_size": 63488 00:40:23.977 } 00:40:23.977 ] 00:40:23.977 }' 00:40:23.977 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:23.977 17:37:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:40:24.543 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:40:24.543 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:40:24.543 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:24.543 17:37:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.543 17:37:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:24.543 17:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:40:24.543 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.543 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:40:24.543 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:40:24.543 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:40:24.543 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.543 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:24.543 [2024-11-26 17:37:25.052312] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:40:24.543 [2024-11-26 17:37:25.052519] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:40:24.543 [2024-11-26 17:37:25.147805] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:40:24.543 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.544 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:40:24.544 17:37:25 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:40:24.544 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:40:24.544 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:24.544 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.544 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:24.544 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.544 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:40:24.544 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:40:24.544 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:40:24.544 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.544 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:24.544 [2024-11-26 17:37:25.199748] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:40:24.544 [2024-11-26 17:37:25.199801] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:40:24.802 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.802 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:40:24.802 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:40:24.802 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:24.802 
17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:40:24.802 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.802 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:24.802 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.802 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:40:24.802 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:40:24.802 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:40:24.802 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:40:24.802 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:40:24.802 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:40:24.802 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.802 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:24.802 BaseBdev2 00:40:24.802 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.802 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:40:24.802 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:40:24.802 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:24.802 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:40:24.802 17:37:25 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:24.802 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:24.802 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:40:24.802 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.802 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:24.802 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.802 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:40:24.802 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.802 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:24.802 [ 00:40:24.802 { 00:40:24.802 "name": "BaseBdev2", 00:40:24.802 "aliases": [ 00:40:24.802 "4585c8c7-82ed-4720-8364-87857dfcc3c9" 00:40:24.802 ], 00:40:24.802 "product_name": "Malloc disk", 00:40:24.802 "block_size": 512, 00:40:24.802 "num_blocks": 65536, 00:40:24.802 "uuid": "4585c8c7-82ed-4720-8364-87857dfcc3c9", 00:40:24.802 "assigned_rate_limits": { 00:40:24.802 "rw_ios_per_sec": 0, 00:40:24.802 "rw_mbytes_per_sec": 0, 00:40:24.802 "r_mbytes_per_sec": 0, 00:40:24.802 "w_mbytes_per_sec": 0 00:40:24.802 }, 00:40:24.802 "claimed": false, 00:40:24.802 "zoned": false, 00:40:24.802 "supported_io_types": { 00:40:24.802 "read": true, 00:40:24.802 "write": true, 00:40:24.802 "unmap": true, 00:40:24.802 "flush": true, 00:40:24.802 "reset": true, 00:40:24.802 "nvme_admin": false, 00:40:24.802 "nvme_io": false, 00:40:24.802 "nvme_io_md": false, 00:40:24.802 "write_zeroes": true, 00:40:24.802 "zcopy": true, 00:40:24.802 "get_zone_info": false, 
00:40:24.802 "zone_management": false, 00:40:24.802 "zone_append": false, 00:40:24.802 "compare": false, 00:40:24.802 "compare_and_write": false, 00:40:24.802 "abort": true, 00:40:24.802 "seek_hole": false, 00:40:24.802 "seek_data": false, 00:40:24.802 "copy": true, 00:40:24.802 "nvme_iov_md": false 00:40:24.802 }, 00:40:24.802 "memory_domains": [ 00:40:24.802 { 00:40:24.802 "dma_device_id": "system", 00:40:24.802 "dma_device_type": 1 00:40:24.802 }, 00:40:24.802 { 00:40:24.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:24.803 "dma_device_type": 2 00:40:24.803 } 00:40:24.803 ], 00:40:24.803 "driver_specific": {} 00:40:24.803 } 00:40:24.803 ] 00:40:24.803 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.803 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:40:24.803 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:40:24.803 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:40:24.803 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:40:24.803 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.803 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:24.803 BaseBdev3 00:40:24.803 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:24.803 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:40:24.803 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:40:24.803 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:24.803 17:37:25 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:40:24.803 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:24.803 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:24.803 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:40:24.803 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:24.803 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:25.061 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:25.061 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:40:25.061 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:25.061 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:25.061 [ 00:40:25.061 { 00:40:25.061 "name": "BaseBdev3", 00:40:25.061 "aliases": [ 00:40:25.061 "a3454bd8-247d-4002-93b6-fe0b672d23b1" 00:40:25.061 ], 00:40:25.061 "product_name": "Malloc disk", 00:40:25.061 "block_size": 512, 00:40:25.061 "num_blocks": 65536, 00:40:25.061 "uuid": "a3454bd8-247d-4002-93b6-fe0b672d23b1", 00:40:25.061 "assigned_rate_limits": { 00:40:25.061 "rw_ios_per_sec": 0, 00:40:25.061 "rw_mbytes_per_sec": 0, 00:40:25.061 "r_mbytes_per_sec": 0, 00:40:25.061 "w_mbytes_per_sec": 0 00:40:25.061 }, 00:40:25.061 "claimed": false, 00:40:25.061 "zoned": false, 00:40:25.061 "supported_io_types": { 00:40:25.061 "read": true, 00:40:25.061 "write": true, 00:40:25.061 "unmap": true, 00:40:25.061 "flush": true, 00:40:25.061 "reset": true, 00:40:25.061 "nvme_admin": false, 00:40:25.061 "nvme_io": false, 00:40:25.061 "nvme_io_md": 
false, 00:40:25.061 "write_zeroes": true, 00:40:25.061 "zcopy": true, 00:40:25.061 "get_zone_info": false, 00:40:25.061 "zone_management": false, 00:40:25.061 "zone_append": false, 00:40:25.061 "compare": false, 00:40:25.061 "compare_and_write": false, 00:40:25.061 "abort": true, 00:40:25.061 "seek_hole": false, 00:40:25.061 "seek_data": false, 00:40:25.061 "copy": true, 00:40:25.061 "nvme_iov_md": false 00:40:25.061 }, 00:40:25.061 "memory_domains": [ 00:40:25.061 { 00:40:25.061 "dma_device_id": "system", 00:40:25.061 "dma_device_type": 1 00:40:25.061 }, 00:40:25.061 { 00:40:25.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:25.061 "dma_device_type": 2 00:40:25.061 } 00:40:25.061 ], 00:40:25.061 "driver_specific": {} 00:40:25.061 } 00:40:25.061 ] 00:40:25.061 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:25.061 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:40:25.061 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:40:25.061 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:40:25.061 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:40:25.061 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:25.061 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:25.061 [2024-11-26 17:37:25.529854] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:40:25.061 [2024-11-26 17:37:25.529900] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:40:25.061 [2024-11-26 17:37:25.529939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:40:25.061 [2024-11-26 17:37:25.531902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:40:25.061 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:25.062 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:40:25.062 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:25.062 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:40:25.062 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:25.062 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:25.062 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:40:25.062 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:25.062 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:25.062 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:25.062 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:25.062 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:25.062 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:25.062 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:25.062 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:25.062 17:37:25 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:25.062 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:25.062 "name": "Existed_Raid", 00:40:25.062 "uuid": "24a34597-0330-422e-bb85-02e59751b896", 00:40:25.062 "strip_size_kb": 64, 00:40:25.062 "state": "configuring", 00:40:25.062 "raid_level": "raid5f", 00:40:25.062 "superblock": true, 00:40:25.062 "num_base_bdevs": 3, 00:40:25.062 "num_base_bdevs_discovered": 2, 00:40:25.062 "num_base_bdevs_operational": 3, 00:40:25.062 "base_bdevs_list": [ 00:40:25.062 { 00:40:25.062 "name": "BaseBdev1", 00:40:25.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:25.062 "is_configured": false, 00:40:25.062 "data_offset": 0, 00:40:25.062 "data_size": 0 00:40:25.062 }, 00:40:25.062 { 00:40:25.062 "name": "BaseBdev2", 00:40:25.062 "uuid": "4585c8c7-82ed-4720-8364-87857dfcc3c9", 00:40:25.062 "is_configured": true, 00:40:25.062 "data_offset": 2048, 00:40:25.062 "data_size": 63488 00:40:25.062 }, 00:40:25.062 { 00:40:25.062 "name": "BaseBdev3", 00:40:25.062 "uuid": "a3454bd8-247d-4002-93b6-fe0b672d23b1", 00:40:25.062 "is_configured": true, 00:40:25.062 "data_offset": 2048, 00:40:25.062 "data_size": 63488 00:40:25.062 } 00:40:25.062 ] 00:40:25.062 }' 00:40:25.062 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:25.062 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:25.320 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:40:25.320 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:25.320 17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:25.320 [2024-11-26 17:37:25.989116] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:40:25.320 
17:37:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:25.320 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:40:25.320 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:25.320 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:40:25.320 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:25.320 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:25.320 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:40:25.320 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:25.320 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:25.320 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:25.320 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:25.320 17:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:25.320 17:37:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:25.320 17:37:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:25.320 17:37:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:25.579 17:37:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:25.579 17:37:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:40:25.579 "name": "Existed_Raid", 00:40:25.579 "uuid": "24a34597-0330-422e-bb85-02e59751b896", 00:40:25.579 "strip_size_kb": 64, 00:40:25.579 "state": "configuring", 00:40:25.579 "raid_level": "raid5f", 00:40:25.579 "superblock": true, 00:40:25.579 "num_base_bdevs": 3, 00:40:25.579 "num_base_bdevs_discovered": 1, 00:40:25.579 "num_base_bdevs_operational": 3, 00:40:25.579 "base_bdevs_list": [ 00:40:25.579 { 00:40:25.579 "name": "BaseBdev1", 00:40:25.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:25.579 "is_configured": false, 00:40:25.579 "data_offset": 0, 00:40:25.579 "data_size": 0 00:40:25.579 }, 00:40:25.579 { 00:40:25.579 "name": null, 00:40:25.579 "uuid": "4585c8c7-82ed-4720-8364-87857dfcc3c9", 00:40:25.579 "is_configured": false, 00:40:25.579 "data_offset": 0, 00:40:25.579 "data_size": 63488 00:40:25.579 }, 00:40:25.579 { 00:40:25.579 "name": "BaseBdev3", 00:40:25.579 "uuid": "a3454bd8-247d-4002-93b6-fe0b672d23b1", 00:40:25.579 "is_configured": true, 00:40:25.579 "data_offset": 2048, 00:40:25.579 "data_size": 63488 00:40:25.579 } 00:40:25.579 ] 00:40:25.579 }' 00:40:25.579 17:37:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:25.579 17:37:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:25.930 17:37:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:25.930 17:37:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:40:25.930 17:37:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:25.930 17:37:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:25.930 17:37:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:25.930 17:37:26 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:40:25.931 17:37:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:40:25.931 17:37:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:25.931 17:37:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:25.931 [2024-11-26 17:37:26.566127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:40:25.931 BaseBdev1 00:40:25.931 17:37:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:25.931 17:37:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:40:25.931 17:37:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:40:25.931 17:37:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:25.931 17:37:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:40:25.931 17:37:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:25.931 17:37:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:25.931 17:37:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:40:25.931 17:37:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:25.931 17:37:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:25.931 17:37:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:25.931 17:37:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:40:25.931 
17:37:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:25.931 17:37:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:25.931 [ 00:40:25.931 { 00:40:25.931 "name": "BaseBdev1", 00:40:25.931 "aliases": [ 00:40:25.931 "893bb53a-0916-4426-b438-52ad47e7d2b8" 00:40:25.931 ], 00:40:25.931 "product_name": "Malloc disk", 00:40:25.931 "block_size": 512, 00:40:25.931 "num_blocks": 65536, 00:40:25.931 "uuid": "893bb53a-0916-4426-b438-52ad47e7d2b8", 00:40:25.931 "assigned_rate_limits": { 00:40:25.931 "rw_ios_per_sec": 0, 00:40:25.931 "rw_mbytes_per_sec": 0, 00:40:25.931 "r_mbytes_per_sec": 0, 00:40:25.931 "w_mbytes_per_sec": 0 00:40:25.931 }, 00:40:25.931 "claimed": true, 00:40:25.931 "claim_type": "exclusive_write", 00:40:25.931 "zoned": false, 00:40:25.931 "supported_io_types": { 00:40:25.931 "read": true, 00:40:25.931 "write": true, 00:40:25.931 "unmap": true, 00:40:25.931 "flush": true, 00:40:25.931 "reset": true, 00:40:25.931 "nvme_admin": false, 00:40:25.931 "nvme_io": false, 00:40:25.931 "nvme_io_md": false, 00:40:25.931 "write_zeroes": true, 00:40:25.931 "zcopy": true, 00:40:25.931 "get_zone_info": false, 00:40:25.931 "zone_management": false, 00:40:25.931 "zone_append": false, 00:40:25.931 "compare": false, 00:40:25.931 "compare_and_write": false, 00:40:25.931 "abort": true, 00:40:25.931 "seek_hole": false, 00:40:25.931 "seek_data": false, 00:40:25.931 "copy": true, 00:40:25.931 "nvme_iov_md": false 00:40:26.189 }, 00:40:26.189 "memory_domains": [ 00:40:26.189 { 00:40:26.189 "dma_device_id": "system", 00:40:26.189 "dma_device_type": 1 00:40:26.189 }, 00:40:26.189 { 00:40:26.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:26.189 "dma_device_type": 2 00:40:26.189 } 00:40:26.189 ], 00:40:26.189 "driver_specific": {} 00:40:26.189 } 00:40:26.189 ] 00:40:26.189 17:37:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:26.189 
17:37:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:40:26.189 17:37:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:40:26.189 17:37:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:26.189 17:37:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:40:26.189 17:37:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:26.189 17:37:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:26.189 17:37:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:40:26.189 17:37:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:26.189 17:37:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:26.189 17:37:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:26.189 17:37:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:26.189 17:37:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:26.189 17:37:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:26.189 17:37:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:26.189 17:37:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:26.189 17:37:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:26.189 17:37:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:40:26.189 "name": "Existed_Raid", 00:40:26.189 "uuid": "24a34597-0330-422e-bb85-02e59751b896", 00:40:26.189 "strip_size_kb": 64, 00:40:26.189 "state": "configuring", 00:40:26.189 "raid_level": "raid5f", 00:40:26.189 "superblock": true, 00:40:26.189 "num_base_bdevs": 3, 00:40:26.189 "num_base_bdevs_discovered": 2, 00:40:26.189 "num_base_bdevs_operational": 3, 00:40:26.189 "base_bdevs_list": [ 00:40:26.189 { 00:40:26.189 "name": "BaseBdev1", 00:40:26.189 "uuid": "893bb53a-0916-4426-b438-52ad47e7d2b8", 00:40:26.189 "is_configured": true, 00:40:26.189 "data_offset": 2048, 00:40:26.189 "data_size": 63488 00:40:26.189 }, 00:40:26.189 { 00:40:26.189 "name": null, 00:40:26.189 "uuid": "4585c8c7-82ed-4720-8364-87857dfcc3c9", 00:40:26.189 "is_configured": false, 00:40:26.189 "data_offset": 0, 00:40:26.189 "data_size": 63488 00:40:26.189 }, 00:40:26.189 { 00:40:26.189 "name": "BaseBdev3", 00:40:26.189 "uuid": "a3454bd8-247d-4002-93b6-fe0b672d23b1", 00:40:26.189 "is_configured": true, 00:40:26.189 "data_offset": 2048, 00:40:26.189 "data_size": 63488 00:40:26.189 } 00:40:26.189 ] 00:40:26.189 }' 00:40:26.189 17:37:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:26.189 17:37:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:26.448 17:37:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:26.448 17:37:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:40:26.448 17:37:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:26.448 17:37:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:26.448 17:37:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:26.448 17:37:27 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:40:26.448 17:37:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:40:26.448 17:37:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:26.448 17:37:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:26.448 [2024-11-26 17:37:27.129252] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:40:26.449 17:37:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:26.449 17:37:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:40:26.449 17:37:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:26.449 17:37:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:40:26.449 17:37:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:26.449 17:37:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:26.449 17:37:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:40:26.449 17:37:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:26.449 17:37:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:26.449 17:37:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:26.449 17:37:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:26.707 17:37:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:26.707 17:37:27 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:26.707 17:37:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:26.707 17:37:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:26.707 17:37:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:26.707 17:37:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:26.707 "name": "Existed_Raid", 00:40:26.707 "uuid": "24a34597-0330-422e-bb85-02e59751b896", 00:40:26.707 "strip_size_kb": 64, 00:40:26.707 "state": "configuring", 00:40:26.707 "raid_level": "raid5f", 00:40:26.707 "superblock": true, 00:40:26.707 "num_base_bdevs": 3, 00:40:26.707 "num_base_bdevs_discovered": 1, 00:40:26.707 "num_base_bdevs_operational": 3, 00:40:26.707 "base_bdevs_list": [ 00:40:26.707 { 00:40:26.707 "name": "BaseBdev1", 00:40:26.707 "uuid": "893bb53a-0916-4426-b438-52ad47e7d2b8", 00:40:26.707 "is_configured": true, 00:40:26.707 "data_offset": 2048, 00:40:26.707 "data_size": 63488 00:40:26.707 }, 00:40:26.707 { 00:40:26.707 "name": null, 00:40:26.707 "uuid": "4585c8c7-82ed-4720-8364-87857dfcc3c9", 00:40:26.707 "is_configured": false, 00:40:26.707 "data_offset": 0, 00:40:26.707 "data_size": 63488 00:40:26.707 }, 00:40:26.707 { 00:40:26.707 "name": null, 00:40:26.707 "uuid": "a3454bd8-247d-4002-93b6-fe0b672d23b1", 00:40:26.707 "is_configured": false, 00:40:26.707 "data_offset": 0, 00:40:26.707 "data_size": 63488 00:40:26.707 } 00:40:26.707 ] 00:40:26.707 }' 00:40:26.708 17:37:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:26.708 17:37:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:26.966 17:37:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 
00:40:26.966 17:37:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:26.966 17:37:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:26.966 17:37:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:26.966 17:37:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:26.966 17:37:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:40:26.966 17:37:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:40:26.966 17:37:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:26.966 17:37:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:26.966 [2024-11-26 17:37:27.624640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:40:26.966 17:37:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:26.966 17:37:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:40:26.966 17:37:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:26.966 17:37:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:40:26.966 17:37:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:26.966 17:37:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:26.966 17:37:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:40:26.967 17:37:27 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:26.967 17:37:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:26.967 17:37:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:26.967 17:37:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:26.967 17:37:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:26.967 17:37:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:26.967 17:37:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:26.967 17:37:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:26.967 17:37:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:27.225 17:37:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:27.225 "name": "Existed_Raid", 00:40:27.225 "uuid": "24a34597-0330-422e-bb85-02e59751b896", 00:40:27.225 "strip_size_kb": 64, 00:40:27.225 "state": "configuring", 00:40:27.225 "raid_level": "raid5f", 00:40:27.225 "superblock": true, 00:40:27.225 "num_base_bdevs": 3, 00:40:27.225 "num_base_bdevs_discovered": 2, 00:40:27.225 "num_base_bdevs_operational": 3, 00:40:27.225 "base_bdevs_list": [ 00:40:27.225 { 00:40:27.225 "name": "BaseBdev1", 00:40:27.225 "uuid": "893bb53a-0916-4426-b438-52ad47e7d2b8", 00:40:27.225 "is_configured": true, 00:40:27.225 "data_offset": 2048, 00:40:27.225 "data_size": 63488 00:40:27.225 }, 00:40:27.225 { 00:40:27.225 "name": null, 00:40:27.225 "uuid": "4585c8c7-82ed-4720-8364-87857dfcc3c9", 00:40:27.225 "is_configured": false, 00:40:27.225 "data_offset": 0, 00:40:27.225 "data_size": 63488 00:40:27.225 }, 00:40:27.225 { 
00:40:27.225 "name": "BaseBdev3", 00:40:27.225 "uuid": "a3454bd8-247d-4002-93b6-fe0b672d23b1", 00:40:27.225 "is_configured": true, 00:40:27.225 "data_offset": 2048, 00:40:27.225 "data_size": 63488 00:40:27.225 } 00:40:27.225 ] 00:40:27.225 }' 00:40:27.225 17:37:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:27.225 17:37:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:27.484 17:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:40:27.484 17:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:27.484 17:37:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:27.484 17:37:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:27.484 17:37:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:27.484 17:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:40:27.484 17:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:40:27.484 17:37:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:27.484 17:37:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:27.484 [2024-11-26 17:37:28.147836] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:40:27.743 17:37:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:27.743 17:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:40:27.743 17:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:40:27.743 17:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:40:27.743 17:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:27.743 17:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:27.743 17:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:40:27.743 17:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:27.743 17:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:27.743 17:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:27.743 17:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:27.743 17:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:27.743 17:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:27.743 17:37:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:27.743 17:37:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:27.743 17:37:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:27.743 17:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:27.743 "name": "Existed_Raid", 00:40:27.743 "uuid": "24a34597-0330-422e-bb85-02e59751b896", 00:40:27.743 "strip_size_kb": 64, 00:40:27.743 "state": "configuring", 00:40:27.743 "raid_level": "raid5f", 00:40:27.743 "superblock": true, 00:40:27.743 "num_base_bdevs": 3, 00:40:27.743 "num_base_bdevs_discovered": 1, 00:40:27.743 
"num_base_bdevs_operational": 3, 00:40:27.743 "base_bdevs_list": [ 00:40:27.743 { 00:40:27.743 "name": null, 00:40:27.743 "uuid": "893bb53a-0916-4426-b438-52ad47e7d2b8", 00:40:27.743 "is_configured": false, 00:40:27.743 "data_offset": 0, 00:40:27.743 "data_size": 63488 00:40:27.743 }, 00:40:27.743 { 00:40:27.743 "name": null, 00:40:27.743 "uuid": "4585c8c7-82ed-4720-8364-87857dfcc3c9", 00:40:27.743 "is_configured": false, 00:40:27.743 "data_offset": 0, 00:40:27.743 "data_size": 63488 00:40:27.743 }, 00:40:27.743 { 00:40:27.743 "name": "BaseBdev3", 00:40:27.743 "uuid": "a3454bd8-247d-4002-93b6-fe0b672d23b1", 00:40:27.743 "is_configured": true, 00:40:27.743 "data_offset": 2048, 00:40:27.743 "data_size": 63488 00:40:27.743 } 00:40:27.743 ] 00:40:27.743 }' 00:40:27.743 17:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:27.743 17:37:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:28.311 17:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:28.311 17:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:40:28.311 17:37:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:28.311 17:37:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:28.311 17:37:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:28.311 17:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:40:28.311 17:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:40:28.311 17:37:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:28.311 17:37:28 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:28.311 [2024-11-26 17:37:28.753946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:40:28.311 17:37:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:28.311 17:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:40:28.311 17:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:40:28.311 17:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:40:28.311 17:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:28.311 17:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:28.311 17:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:40:28.311 17:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:28.311 17:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:28.311 17:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:28.311 17:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:28.311 17:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:28.311 17:37:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:28.311 17:37:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:28.311 17:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:40:28.311 17:37:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:28.311 17:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:28.311 "name": "Existed_Raid", 00:40:28.311 "uuid": "24a34597-0330-422e-bb85-02e59751b896", 00:40:28.311 "strip_size_kb": 64, 00:40:28.311 "state": "configuring", 00:40:28.311 "raid_level": "raid5f", 00:40:28.311 "superblock": true, 00:40:28.311 "num_base_bdevs": 3, 00:40:28.311 "num_base_bdevs_discovered": 2, 00:40:28.311 "num_base_bdevs_operational": 3, 00:40:28.311 "base_bdevs_list": [ 00:40:28.311 { 00:40:28.311 "name": null, 00:40:28.311 "uuid": "893bb53a-0916-4426-b438-52ad47e7d2b8", 00:40:28.311 "is_configured": false, 00:40:28.311 "data_offset": 0, 00:40:28.311 "data_size": 63488 00:40:28.311 }, 00:40:28.311 { 00:40:28.311 "name": "BaseBdev2", 00:40:28.311 "uuid": "4585c8c7-82ed-4720-8364-87857dfcc3c9", 00:40:28.311 "is_configured": true, 00:40:28.311 "data_offset": 2048, 00:40:28.311 "data_size": 63488 00:40:28.311 }, 00:40:28.311 { 00:40:28.311 "name": "BaseBdev3", 00:40:28.311 "uuid": "a3454bd8-247d-4002-93b6-fe0b672d23b1", 00:40:28.312 "is_configured": true, 00:40:28.312 "data_offset": 2048, 00:40:28.312 "data_size": 63488 00:40:28.312 } 00:40:28.312 ] 00:40:28.312 }' 00:40:28.312 17:37:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:28.312 17:37:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:28.570 17:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:40:28.571 17:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:28.571 17:37:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:28.571 17:37:29 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:40:28.571 17:37:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:28.571 17:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:40:28.571 17:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:40:28.571 17:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:28.571 17:37:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:28.571 17:37:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:28.571 17:37:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:28.571 17:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 893bb53a-0916-4426-b438-52ad47e7d2b8 00:40:28.571 17:37:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:28.571 17:37:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:28.830 [2024-11-26 17:37:29.286799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:40:28.830 [2024-11-26 17:37:29.287130] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:40:28.830 [2024-11-26 17:37:29.287172] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:40:28.830 [2024-11-26 17:37:29.287454] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:40:28.830 NewBaseBdev 00:40:28.830 17:37:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:28.830 17:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # 
waitforbdev NewBaseBdev 00:40:28.830 17:37:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:40:28.830 17:37:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:28.830 17:37:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:40:28.830 17:37:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:28.830 17:37:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:28.830 17:37:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:40:28.830 17:37:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:28.830 17:37:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:28.830 [2024-11-26 17:37:29.293097] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:40:28.830 [2024-11-26 17:37:29.293156] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:40:28.830 [2024-11-26 17:37:29.293353] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:28.830 17:37:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:28.830 17:37:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:40:28.830 17:37:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:28.830 17:37:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:28.830 [ 00:40:28.830 { 00:40:28.830 "name": "NewBaseBdev", 00:40:28.830 "aliases": [ 00:40:28.830 "893bb53a-0916-4426-b438-52ad47e7d2b8" 00:40:28.830 
], 00:40:28.830 "product_name": "Malloc disk", 00:40:28.830 "block_size": 512, 00:40:28.830 "num_blocks": 65536, 00:40:28.830 "uuid": "893bb53a-0916-4426-b438-52ad47e7d2b8", 00:40:28.830 "assigned_rate_limits": { 00:40:28.830 "rw_ios_per_sec": 0, 00:40:28.830 "rw_mbytes_per_sec": 0, 00:40:28.830 "r_mbytes_per_sec": 0, 00:40:28.830 "w_mbytes_per_sec": 0 00:40:28.830 }, 00:40:28.830 "claimed": true, 00:40:28.830 "claim_type": "exclusive_write", 00:40:28.830 "zoned": false, 00:40:28.830 "supported_io_types": { 00:40:28.830 "read": true, 00:40:28.830 "write": true, 00:40:28.830 "unmap": true, 00:40:28.830 "flush": true, 00:40:28.830 "reset": true, 00:40:28.830 "nvme_admin": false, 00:40:28.830 "nvme_io": false, 00:40:28.830 "nvme_io_md": false, 00:40:28.830 "write_zeroes": true, 00:40:28.830 "zcopy": true, 00:40:28.830 "get_zone_info": false, 00:40:28.830 "zone_management": false, 00:40:28.830 "zone_append": false, 00:40:28.830 "compare": false, 00:40:28.830 "compare_and_write": false, 00:40:28.830 "abort": true, 00:40:28.830 "seek_hole": false, 00:40:28.830 "seek_data": false, 00:40:28.830 "copy": true, 00:40:28.830 "nvme_iov_md": false 00:40:28.830 }, 00:40:28.830 "memory_domains": [ 00:40:28.830 { 00:40:28.830 "dma_device_id": "system", 00:40:28.830 "dma_device_type": 1 00:40:28.830 }, 00:40:28.830 { 00:40:28.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:40:28.830 "dma_device_type": 2 00:40:28.830 } 00:40:28.830 ], 00:40:28.830 "driver_specific": {} 00:40:28.830 } 00:40:28.830 ] 00:40:28.830 17:37:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:28.830 17:37:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:40:28.830 17:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:40:28.830 17:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:40:28.830 17:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:28.830 17:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:28.830 17:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:28.830 17:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:40:28.830 17:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:28.830 17:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:28.830 17:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:28.830 17:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:28.830 17:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:28.830 17:37:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:28.830 17:37:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:28.830 17:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:40:28.830 17:37:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:28.830 17:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:28.830 "name": "Existed_Raid", 00:40:28.830 "uuid": "24a34597-0330-422e-bb85-02e59751b896", 00:40:28.830 "strip_size_kb": 64, 00:40:28.830 "state": "online", 00:40:28.830 "raid_level": "raid5f", 00:40:28.830 "superblock": true, 00:40:28.830 "num_base_bdevs": 3, 00:40:28.830 "num_base_bdevs_discovered": 3, 00:40:28.830 
"num_base_bdevs_operational": 3, 00:40:28.830 "base_bdevs_list": [ 00:40:28.830 { 00:40:28.831 "name": "NewBaseBdev", 00:40:28.831 "uuid": "893bb53a-0916-4426-b438-52ad47e7d2b8", 00:40:28.831 "is_configured": true, 00:40:28.831 "data_offset": 2048, 00:40:28.831 "data_size": 63488 00:40:28.831 }, 00:40:28.831 { 00:40:28.831 "name": "BaseBdev2", 00:40:28.831 "uuid": "4585c8c7-82ed-4720-8364-87857dfcc3c9", 00:40:28.831 "is_configured": true, 00:40:28.831 "data_offset": 2048, 00:40:28.831 "data_size": 63488 00:40:28.831 }, 00:40:28.831 { 00:40:28.831 "name": "BaseBdev3", 00:40:28.831 "uuid": "a3454bd8-247d-4002-93b6-fe0b672d23b1", 00:40:28.831 "is_configured": true, 00:40:28.831 "data_offset": 2048, 00:40:28.831 "data_size": 63488 00:40:28.831 } 00:40:28.831 ] 00:40:28.831 }' 00:40:28.831 17:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:28.831 17:37:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:29.400 17:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:40:29.400 17:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:40:29.400 17:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:40:29.400 17:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:40:29.401 17:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:40:29.401 17:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:40:29.401 17:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:40:29.401 17:37:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:29.401 17:37:29 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:29.401 17:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:40:29.401 [2024-11-26 17:37:29.799177] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:40:29.401 17:37:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:29.401 17:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:40:29.401 "name": "Existed_Raid", 00:40:29.401 "aliases": [ 00:40:29.401 "24a34597-0330-422e-bb85-02e59751b896" 00:40:29.401 ], 00:40:29.401 "product_name": "Raid Volume", 00:40:29.401 "block_size": 512, 00:40:29.401 "num_blocks": 126976, 00:40:29.401 "uuid": "24a34597-0330-422e-bb85-02e59751b896", 00:40:29.401 "assigned_rate_limits": { 00:40:29.401 "rw_ios_per_sec": 0, 00:40:29.401 "rw_mbytes_per_sec": 0, 00:40:29.401 "r_mbytes_per_sec": 0, 00:40:29.401 "w_mbytes_per_sec": 0 00:40:29.401 }, 00:40:29.401 "claimed": false, 00:40:29.401 "zoned": false, 00:40:29.401 "supported_io_types": { 00:40:29.401 "read": true, 00:40:29.401 "write": true, 00:40:29.401 "unmap": false, 00:40:29.401 "flush": false, 00:40:29.401 "reset": true, 00:40:29.401 "nvme_admin": false, 00:40:29.401 "nvme_io": false, 00:40:29.401 "nvme_io_md": false, 00:40:29.401 "write_zeroes": true, 00:40:29.401 "zcopy": false, 00:40:29.401 "get_zone_info": false, 00:40:29.401 "zone_management": false, 00:40:29.401 "zone_append": false, 00:40:29.401 "compare": false, 00:40:29.401 "compare_and_write": false, 00:40:29.401 "abort": false, 00:40:29.401 "seek_hole": false, 00:40:29.401 "seek_data": false, 00:40:29.401 "copy": false, 00:40:29.401 "nvme_iov_md": false 00:40:29.401 }, 00:40:29.401 "driver_specific": { 00:40:29.401 "raid": { 00:40:29.401 "uuid": "24a34597-0330-422e-bb85-02e59751b896", 00:40:29.401 "strip_size_kb": 64, 00:40:29.401 "state": "online", 00:40:29.401 "raid_level": 
"raid5f", 00:40:29.401 "superblock": true, 00:40:29.401 "num_base_bdevs": 3, 00:40:29.401 "num_base_bdevs_discovered": 3, 00:40:29.401 "num_base_bdevs_operational": 3, 00:40:29.401 "base_bdevs_list": [ 00:40:29.401 { 00:40:29.401 "name": "NewBaseBdev", 00:40:29.401 "uuid": "893bb53a-0916-4426-b438-52ad47e7d2b8", 00:40:29.401 "is_configured": true, 00:40:29.401 "data_offset": 2048, 00:40:29.401 "data_size": 63488 00:40:29.401 }, 00:40:29.401 { 00:40:29.401 "name": "BaseBdev2", 00:40:29.401 "uuid": "4585c8c7-82ed-4720-8364-87857dfcc3c9", 00:40:29.401 "is_configured": true, 00:40:29.401 "data_offset": 2048, 00:40:29.401 "data_size": 63488 00:40:29.401 }, 00:40:29.401 { 00:40:29.401 "name": "BaseBdev3", 00:40:29.401 "uuid": "a3454bd8-247d-4002-93b6-fe0b672d23b1", 00:40:29.401 "is_configured": true, 00:40:29.401 "data_offset": 2048, 00:40:29.401 "data_size": 63488 00:40:29.401 } 00:40:29.401 ] 00:40:29.401 } 00:40:29.401 } 00:40:29.401 }' 00:40:29.401 17:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:40:29.401 17:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:40:29.401 BaseBdev2 00:40:29.401 BaseBdev3' 00:40:29.401 17:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:40:29.401 17:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:40:29.401 17:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:40:29.401 17:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:40:29.401 17:37:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:29.401 17:37:29 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:40:29.401 17:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:40:29.401 17:37:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:29.401 17:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:40:29.401 17:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:40:29.401 17:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:40:29.401 17:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:40:29.401 17:37:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:40:29.401 17:37:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:29.401 17:37:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:29.401 17:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:29.401 17:37:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:40:29.401 17:37:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:40:29.401 17:37:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:40:29.401 17:37:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:40:29.401 17:37:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:40:29.401 17:37:30 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:29.401 17:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:29.401 17:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:29.401 17:37:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:40:29.401 17:37:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:40:29.401 17:37:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:40:29.401 17:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:29.401 17:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:29.401 [2024-11-26 17:37:30.062543] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:40:29.401 [2024-11-26 17:37:30.062576] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:40:29.401 [2024-11-26 17:37:30.062670] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:40:29.401 [2024-11-26 17:37:30.062988] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:40:29.401 [2024-11-26 17:37:30.063004] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:40:29.401 17:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:29.401 17:37:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80810 00:40:29.401 17:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80810 ']' 00:40:29.401 17:37:30 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 80810 00:40:29.401 17:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:40:29.401 17:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:29.401 17:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80810 00:40:29.401 17:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:29.401 killing process with pid 80810 00:40:29.401 17:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:29.401 17:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80810' 00:40:29.401 17:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80810 00:40:29.401 17:37:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80810 00:40:29.401 [2024-11-26 17:37:30.093431] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:40:29.968 [2024-11-26 17:37:30.396210] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:40:30.959 17:37:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:40:30.959 00:40:30.959 real 0m10.817s 00:40:30.959 user 0m17.118s 00:40:30.959 sys 0m1.949s 00:40:30.959 17:37:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:30.959 ************************************ 00:40:30.959 END TEST raid5f_state_function_test_sb 00:40:30.959 ************************************ 00:40:30.959 17:37:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:30.959 17:37:31 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:40:30.959 17:37:31 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:30.959 17:37:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:30.959 17:37:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:40:30.959 ************************************ 00:40:30.959 START TEST raid5f_superblock_test 00:40:30.959 ************************************ 00:40:30.959 17:37:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:40:30.959 17:37:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:40:30.959 17:37:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:40:30.959 17:37:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:40:30.959 17:37:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:40:30.959 17:37:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:40:30.959 17:37:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:40:30.959 17:37:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:40:30.959 17:37:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:40:30.959 17:37:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:40:30.959 17:37:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:40:30.959 17:37:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:40:30.959 17:37:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:40:30.959 17:37:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:40:30.959 17:37:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 
00:40:30.959 17:37:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:40:30.959 17:37:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:40:30.959 17:37:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81436 00:40:30.959 17:37:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:40:30.959 17:37:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81436 00:40:30.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:30.959 17:37:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81436 ']' 00:40:30.959 17:37:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:30.959 17:37:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:30.959 17:37:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:30.959 17:37:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:30.959 17:37:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:31.218 [2024-11-26 17:37:31.708283] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:40:31.218 [2024-11-26 17:37:31.708488] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81436 ] 00:40:31.218 [2024-11-26 17:37:31.881764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:31.531 [2024-11-26 17:37:31.995861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:31.531 [2024-11-26 17:37:32.194131] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:40:31.531 [2024-11-26 17:37:32.194287] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:32.099 malloc1 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:32.099 [2024-11-26 17:37:32.613140] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:40:32.099 [2024-11-26 17:37:32.613306] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:32.099 [2024-11-26 17:37:32.613360] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:40:32.099 [2024-11-26 17:37:32.613400] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:32.099 [2024-11-26 17:37:32.615796] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:32.099 [2024-11-26 17:37:32.615869] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:40:32.099 pt1 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:32.099 malloc2 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:32.099 [2024-11-26 17:37:32.666052] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:40:32.099 [2024-11-26 17:37:32.666151] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:32.099 [2024-11-26 17:37:32.666210] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:40:32.099 [2024-11-26 17:37:32.666239] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:32.099 [2024-11-26 17:37:32.668340] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:32.099 [2024-11-26 17:37:32.668409] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:40:32.099 pt2 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:32.099 malloc3 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:32.099 [2024-11-26 17:37:32.740709] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:40:32.099 [2024-11-26 17:37:32.740819] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:32.099 [2024-11-26 17:37:32.740859] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:40:32.099 [2024-11-26 17:37:32.740890] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:32.099 [2024-11-26 17:37:32.742937] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:32.099 [2024-11-26 17:37:32.743005] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:40:32.099 pt3 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:40:32.099 17:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:40:32.100 17:37:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:32.100 17:37:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:32.100 [2024-11-26 17:37:32.752736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:40:32.100 [2024-11-26 17:37:32.754510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:40:32.100 [2024-11-26 17:37:32.754591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:40:32.100 [2024-11-26 17:37:32.754760] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:40:32.100 [2024-11-26 17:37:32.754780] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:40:32.100 [2024-11-26 17:37:32.755004] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:40:32.100 [2024-11-26 17:37:32.760251] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:40:32.100 [2024-11-26 17:37:32.760270] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:40:32.100 [2024-11-26 17:37:32.760470] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:32.100 17:37:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:32.100 17:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:40:32.100 17:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:32.100 17:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:32.100 17:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:32.100 17:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:32.100 17:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:40:32.100 17:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:32.100 17:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:32.100 17:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:32.100 17:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:32.100 17:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:32.100 17:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:40:32.100 17:37:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:32.100 17:37:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:32.100 17:37:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:32.359 17:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:32.359 "name": "raid_bdev1", 00:40:32.359 "uuid": "0a554d56-c507-4a31-95df-a5d36844ad37", 00:40:32.359 "strip_size_kb": 64, 00:40:32.359 "state": "online", 00:40:32.359 "raid_level": "raid5f", 00:40:32.359 "superblock": true, 00:40:32.359 "num_base_bdevs": 3, 00:40:32.359 "num_base_bdevs_discovered": 3, 00:40:32.359 "num_base_bdevs_operational": 3, 00:40:32.359 "base_bdevs_list": [ 00:40:32.359 { 00:40:32.359 "name": "pt1", 00:40:32.359 "uuid": "00000000-0000-0000-0000-000000000001", 00:40:32.359 "is_configured": true, 00:40:32.359 "data_offset": 2048, 00:40:32.359 "data_size": 63488 00:40:32.359 }, 00:40:32.359 { 00:40:32.359 "name": "pt2", 00:40:32.359 "uuid": "00000000-0000-0000-0000-000000000002", 00:40:32.359 "is_configured": true, 00:40:32.359 "data_offset": 2048, 00:40:32.359 "data_size": 63488 00:40:32.359 }, 00:40:32.359 { 00:40:32.359 "name": "pt3", 00:40:32.359 "uuid": "00000000-0000-0000-0000-000000000003", 00:40:32.359 "is_configured": true, 00:40:32.359 "data_offset": 2048, 00:40:32.359 "data_size": 63488 00:40:32.359 } 00:40:32.359 ] 00:40:32.359 }' 00:40:32.359 17:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:32.359 17:37:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:32.618 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:40:32.619 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:40:32.619 17:37:33 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:40:32.619 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:40:32.619 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:40:32.619 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:40:32.619 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:40:32.619 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:32.619 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:32.619 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:40:32.619 [2024-11-26 17:37:33.214207] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:40:32.619 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:32.619 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:40:32.619 "name": "raid_bdev1", 00:40:32.619 "aliases": [ 00:40:32.619 "0a554d56-c507-4a31-95df-a5d36844ad37" 00:40:32.619 ], 00:40:32.619 "product_name": "Raid Volume", 00:40:32.619 "block_size": 512, 00:40:32.619 "num_blocks": 126976, 00:40:32.619 "uuid": "0a554d56-c507-4a31-95df-a5d36844ad37", 00:40:32.619 "assigned_rate_limits": { 00:40:32.619 "rw_ios_per_sec": 0, 00:40:32.619 "rw_mbytes_per_sec": 0, 00:40:32.619 "r_mbytes_per_sec": 0, 00:40:32.619 "w_mbytes_per_sec": 0 00:40:32.619 }, 00:40:32.619 "claimed": false, 00:40:32.619 "zoned": false, 00:40:32.619 "supported_io_types": { 00:40:32.619 "read": true, 00:40:32.619 "write": true, 00:40:32.619 "unmap": false, 00:40:32.619 "flush": false, 00:40:32.619 "reset": true, 00:40:32.619 "nvme_admin": false, 00:40:32.619 "nvme_io": false, 00:40:32.619 "nvme_io_md": false, 
00:40:32.619 "write_zeroes": true, 00:40:32.619 "zcopy": false, 00:40:32.619 "get_zone_info": false, 00:40:32.619 "zone_management": false, 00:40:32.619 "zone_append": false, 00:40:32.619 "compare": false, 00:40:32.619 "compare_and_write": false, 00:40:32.619 "abort": false, 00:40:32.619 "seek_hole": false, 00:40:32.619 "seek_data": false, 00:40:32.619 "copy": false, 00:40:32.619 "nvme_iov_md": false 00:40:32.619 }, 00:40:32.619 "driver_specific": { 00:40:32.619 "raid": { 00:40:32.619 "uuid": "0a554d56-c507-4a31-95df-a5d36844ad37", 00:40:32.619 "strip_size_kb": 64, 00:40:32.619 "state": "online", 00:40:32.619 "raid_level": "raid5f", 00:40:32.619 "superblock": true, 00:40:32.619 "num_base_bdevs": 3, 00:40:32.619 "num_base_bdevs_discovered": 3, 00:40:32.619 "num_base_bdevs_operational": 3, 00:40:32.619 "base_bdevs_list": [ 00:40:32.619 { 00:40:32.619 "name": "pt1", 00:40:32.619 "uuid": "00000000-0000-0000-0000-000000000001", 00:40:32.619 "is_configured": true, 00:40:32.619 "data_offset": 2048, 00:40:32.619 "data_size": 63488 00:40:32.619 }, 00:40:32.619 { 00:40:32.619 "name": "pt2", 00:40:32.619 "uuid": "00000000-0000-0000-0000-000000000002", 00:40:32.619 "is_configured": true, 00:40:32.619 "data_offset": 2048, 00:40:32.619 "data_size": 63488 00:40:32.619 }, 00:40:32.619 { 00:40:32.619 "name": "pt3", 00:40:32.619 "uuid": "00000000-0000-0000-0000-000000000003", 00:40:32.619 "is_configured": true, 00:40:32.619 "data_offset": 2048, 00:40:32.619 "data_size": 63488 00:40:32.619 } 00:40:32.619 ] 00:40:32.619 } 00:40:32.619 } 00:40:32.619 }' 00:40:32.619 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:40:32.619 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:40:32.619 pt2 00:40:32.619 pt3' 00:40:32.619 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:40:32.879 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:40:32.879 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:40:32.879 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:40:32.879 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:40:32.879 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:32.879 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:32.879 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:32.879 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:40:32.879 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:40:32.879 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:40:32.879 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:40:32.879 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:40:32.879 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:32.879 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:32.879 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:32.879 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:40:32.879 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:40:32.879 
17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:40:32.879 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:40:32.879 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:40:32.879 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:32.879 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:32.879 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:32.879 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:40:32.879 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:40:32.879 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:40:32.879 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:32.879 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:32.879 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:40:32.879 [2024-11-26 17:37:33.461738] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:40:32.879 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:32.879 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0a554d56-c507-4a31-95df-a5d36844ad37 00:40:32.879 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 0a554d56-c507-4a31-95df-a5d36844ad37 ']' 00:40:32.879 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:40:32.879 17:37:33 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:32.879 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:32.879 [2024-11-26 17:37:33.509453] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:40:32.879 [2024-11-26 17:37:33.509488] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:40:32.879 [2024-11-26 17:37:33.509591] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:40:32.879 [2024-11-26 17:37:33.509692] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:40:32.879 [2024-11-26 17:37:33.509705] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:40:32.879 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:32.879 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:40:32.879 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:32.879 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:32.879 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:32.879 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:32.879 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:40:32.879 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:40:32.879 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:40:32.879 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:40:32.879 17:37:33 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:40:32.879 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:33.139 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.139 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:40:33.139 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:40:33.139 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.139 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:33.139 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.139 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:40:33.139 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:40:33.139 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.139 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:33.139 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.139 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:40:33.139 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.139 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:33.139 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:40:33.139 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.139 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:40:33.139 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:40:33.139 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:40:33.139 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:40:33.139 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:40:33.139 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:33.139 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:40:33.139 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:33.139 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:40:33.139 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.139 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:33.139 [2024-11-26 17:37:33.649289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:40:33.139 [2024-11-26 17:37:33.651337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:40:33.139 [2024-11-26 17:37:33.651459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:40:33.139 [2024-11-26 17:37:33.651551] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:40:33.139 [2024-11-26 17:37:33.651659] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:40:33.139 [2024-11-26 17:37:33.651726] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:40:33.139 [2024-11-26 17:37:33.651781] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:40:33.139 [2024-11-26 17:37:33.651811] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:40:33.139 request: 00:40:33.139 { 00:40:33.139 "name": "raid_bdev1", 00:40:33.139 "raid_level": "raid5f", 00:40:33.139 "base_bdevs": [ 00:40:33.139 "malloc1", 00:40:33.139 "malloc2", 00:40:33.139 "malloc3" 00:40:33.139 ], 00:40:33.139 "strip_size_kb": 64, 00:40:33.139 "superblock": false, 00:40:33.139 "method": "bdev_raid_create", 00:40:33.139 "req_id": 1 00:40:33.139 } 00:40:33.139 Got JSON-RPC error response 00:40:33.139 response: 00:40:33.139 { 00:40:33.139 "code": -17, 00:40:33.139 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:40:33.139 } 00:40:33.139 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:40:33.139 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:40:33.139 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:33.139 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:33.139 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:33.139 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:33.139 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.139 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
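The negative-path check above deliberately re-runs `bdev_raid_create` against malloc bdevs that still carry a superblock from the earlier raid bdev, and expects the JSON-RPC layer to reject it with code -17 ("File exists", i.e. -EEXIST). A minimal, self-contained sketch of inspecting such an error response with `jq` — the response body is copied from the log above, while the `response`/`code`/`es` variable names are illustrative, not part of the test scripts:

```shell
#!/bin/sh
# Error body as it appears in the log above (reconstructed sample, not a live RPC).
response='{"code": -17, "message": "Failed to create RAID bdev raid_bdev1: File exists"}'

# Extract the numeric JSON-RPC error code; -17 corresponds to -EEXIST.
code=$(echo "$response" | jq -r '.code')

# Mirror the test's NOT/es convention: a failing RPC sets an expected status of 1.
es=0
if [ "$code" -ne 0 ]; then
    es=1
fi
echo "$code $es"
```

This matches the `es=1` / `(( !es == 0 ))` bookkeeping visible in the trace, where the helper asserts that the command failed rather than succeeded.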
00:40:33.140 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:40:33.140 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.140 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:40:33.140 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:40:33.140 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:40:33.140 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.140 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:33.140 [2024-11-26 17:37:33.705125] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:40:33.140 [2024-11-26 17:37:33.705215] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:33.140 [2024-11-26 17:37:33.705237] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:40:33.140 [2024-11-26 17:37:33.705248] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:33.140 [2024-11-26 17:37:33.707819] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:33.140 [2024-11-26 17:37:33.707859] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:40:33.140 [2024-11-26 17:37:33.707975] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:40:33.140 [2024-11-26 17:37:33.708043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:40:33.140 pt1 00:40:33.140 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.140 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid5f 64 3 00:40:33.140 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:33.140 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:40:33.140 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:33.140 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:33.140 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:40:33.140 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:33.140 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:33.140 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:33.140 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:33.140 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:33.140 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:33.140 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.140 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:33.140 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.140 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:33.140 "name": "raid_bdev1", 00:40:33.140 "uuid": "0a554d56-c507-4a31-95df-a5d36844ad37", 00:40:33.140 "strip_size_kb": 64, 00:40:33.140 "state": "configuring", 00:40:33.140 "raid_level": "raid5f", 00:40:33.140 "superblock": true, 00:40:33.140 "num_base_bdevs": 3, 00:40:33.140 "num_base_bdevs_discovered": 1, 00:40:33.140 
"num_base_bdevs_operational": 3, 00:40:33.140 "base_bdevs_list": [ 00:40:33.140 { 00:40:33.140 "name": "pt1", 00:40:33.140 "uuid": "00000000-0000-0000-0000-000000000001", 00:40:33.140 "is_configured": true, 00:40:33.140 "data_offset": 2048, 00:40:33.140 "data_size": 63488 00:40:33.140 }, 00:40:33.140 { 00:40:33.140 "name": null, 00:40:33.140 "uuid": "00000000-0000-0000-0000-000000000002", 00:40:33.140 "is_configured": false, 00:40:33.140 "data_offset": 2048, 00:40:33.140 "data_size": 63488 00:40:33.140 }, 00:40:33.140 { 00:40:33.140 "name": null, 00:40:33.140 "uuid": "00000000-0000-0000-0000-000000000003", 00:40:33.140 "is_configured": false, 00:40:33.140 "data_offset": 2048, 00:40:33.140 "data_size": 63488 00:40:33.140 } 00:40:33.140 ] 00:40:33.140 }' 00:40:33.140 17:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:33.140 17:37:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:33.710 17:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:40:33.710 17:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:40:33.710 17:37:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.710 17:37:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:33.710 [2024-11-26 17:37:34.124458] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:40:33.710 [2024-11-26 17:37:34.124594] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:33.710 [2024-11-26 17:37:34.124635] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:40:33.710 [2024-11-26 17:37:34.124664] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:33.710 [2024-11-26 17:37:34.125117] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:33.710 [2024-11-26 17:37:34.125182] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:40:33.710 [2024-11-26 17:37:34.125299] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:40:33.710 [2024-11-26 17:37:34.125355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:40:33.710 pt2 00:40:33.710 17:37:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.710 17:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:40:33.710 17:37:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.710 17:37:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:33.710 [2024-11-26 17:37:34.132425] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:40:33.710 17:37:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.710 17:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:40:33.710 17:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:33.710 17:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:40:33.710 17:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:33.710 17:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:33.710 17:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:40:33.710 17:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:33.710 17:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:40:33.710 17:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:33.710 17:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:33.710 17:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:33.710 17:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:33.710 17:37:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.710 17:37:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:33.710 17:37:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.710 17:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:33.710 "name": "raid_bdev1", 00:40:33.710 "uuid": "0a554d56-c507-4a31-95df-a5d36844ad37", 00:40:33.710 "strip_size_kb": 64, 00:40:33.710 "state": "configuring", 00:40:33.710 "raid_level": "raid5f", 00:40:33.710 "superblock": true, 00:40:33.710 "num_base_bdevs": 3, 00:40:33.710 "num_base_bdevs_discovered": 1, 00:40:33.710 "num_base_bdevs_operational": 3, 00:40:33.710 "base_bdevs_list": [ 00:40:33.710 { 00:40:33.710 "name": "pt1", 00:40:33.710 "uuid": "00000000-0000-0000-0000-000000000001", 00:40:33.710 "is_configured": true, 00:40:33.710 "data_offset": 2048, 00:40:33.710 "data_size": 63488 00:40:33.710 }, 00:40:33.710 { 00:40:33.710 "name": null, 00:40:33.710 "uuid": "00000000-0000-0000-0000-000000000002", 00:40:33.710 "is_configured": false, 00:40:33.710 "data_offset": 0, 00:40:33.710 "data_size": 63488 00:40:33.710 }, 00:40:33.710 { 00:40:33.710 "name": null, 00:40:33.710 "uuid": "00000000-0000-0000-0000-000000000003", 00:40:33.710 "is_configured": false, 00:40:33.710 "data_offset": 2048, 00:40:33.710 "data_size": 63488 00:40:33.710 } 00:40:33.710 ] 00:40:33.710 }' 00:40:33.710 17:37:34 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:33.710 17:37:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:33.969 17:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:40:33.969 17:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:40:33.969 17:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:40:33.969 17:37:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.969 17:37:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:33.969 [2024-11-26 17:37:34.523741] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:40:33.969 [2024-11-26 17:37:34.523832] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:33.969 [2024-11-26 17:37:34.523853] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:40:33.969 [2024-11-26 17:37:34.523875] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:33.969 [2024-11-26 17:37:34.524335] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:33.969 [2024-11-26 17:37:34.524356] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:40:33.969 [2024-11-26 17:37:34.524462] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:40:33.969 [2024-11-26 17:37:34.524490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:40:33.969 pt2 00:40:33.969 17:37:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.969 17:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:40:33.969 17:37:34 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:40:33.969 17:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:40:33.969 17:37:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.969 17:37:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:33.969 [2024-11-26 17:37:34.535743] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:40:33.969 [2024-11-26 17:37:34.535796] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:33.969 [2024-11-26 17:37:34.535811] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:40:33.969 [2024-11-26 17:37:34.535821] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:33.969 [2024-11-26 17:37:34.536185] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:33.969 [2024-11-26 17:37:34.536205] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:40:33.969 [2024-11-26 17:37:34.536266] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:40:33.969 [2024-11-26 17:37:34.536285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:40:33.969 [2024-11-26 17:37:34.536426] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:40:33.969 [2024-11-26 17:37:34.536441] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:40:33.969 [2024-11-26 17:37:34.536697] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:40:33.969 [2024-11-26 17:37:34.541884] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:40:33.969 [2024-11-26 17:37:34.541904] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:40:33.969 [2024-11-26 17:37:34.542078] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:33.969 pt3 00:40:33.969 17:37:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.969 17:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:40:33.969 17:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:40:33.970 17:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:40:33.970 17:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:33.970 17:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:33.970 17:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:33.970 17:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:33.970 17:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:40:33.970 17:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:33.970 17:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:33.970 17:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:33.970 17:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:33.970 17:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:33.970 17:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:33.970 17:37:34 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:40:33.970 17:37:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:33.970 17:37:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:33.970 17:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:33.970 "name": "raid_bdev1", 00:40:33.970 "uuid": "0a554d56-c507-4a31-95df-a5d36844ad37", 00:40:33.970 "strip_size_kb": 64, 00:40:33.970 "state": "online", 00:40:33.970 "raid_level": "raid5f", 00:40:33.970 "superblock": true, 00:40:33.970 "num_base_bdevs": 3, 00:40:33.970 "num_base_bdevs_discovered": 3, 00:40:33.970 "num_base_bdevs_operational": 3, 00:40:33.970 "base_bdevs_list": [ 00:40:33.970 { 00:40:33.970 "name": "pt1", 00:40:33.970 "uuid": "00000000-0000-0000-0000-000000000001", 00:40:33.970 "is_configured": true, 00:40:33.970 "data_offset": 2048, 00:40:33.970 "data_size": 63488 00:40:33.970 }, 00:40:33.970 { 00:40:33.970 "name": "pt2", 00:40:33.970 "uuid": "00000000-0000-0000-0000-000000000002", 00:40:33.970 "is_configured": true, 00:40:33.970 "data_offset": 2048, 00:40:33.970 "data_size": 63488 00:40:33.970 }, 00:40:33.970 { 00:40:33.970 "name": "pt3", 00:40:33.970 "uuid": "00000000-0000-0000-0000-000000000003", 00:40:33.970 "is_configured": true, 00:40:33.970 "data_offset": 2048, 00:40:33.970 "data_size": 63488 00:40:33.970 } 00:40:33.970 ] 00:40:33.970 }' 00:40:33.970 17:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:33.970 17:37:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:34.539 17:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:40:34.539 17:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:40:34.539 17:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
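The `verify_raid_bdev_state` helper traced above isolates one bdev's record from `bdev_raid_get_bdevs all` with a `jq` select before comparing its fields. A self-contained demo of that filter on a trimmed sample list (field names copied from the log; the sample data and `bdevs`/`tmp`/`state` names are illustrative):

```shell
#!/bin/sh
# Trimmed stand-in for `rpc_cmd bdev_raid_get_bdevs all` output (sample data,
# shortened to two entries for illustration).
bdevs='[{"name": "raid_bdev1", "state": "online", "raid_level": "raid5f"},
        {"name": "other_raid", "state": "configuring", "raid_level": "raid1"}]'

# Same filter the test uses at bdev_raid.sh@113 to pick out one bdev's info.
tmp=$(echo "$bdevs" | jq -r '.[] | select(.name == "raid_bdev1")')

# Subsequent checks then reduce to plain jq lookups on the selected record.
state=$(echo "$tmp" | jq -r '.state')
echo "state: $state"
```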
00:40:34.539 17:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:40:34.539 17:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:40:34.539 17:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:40:34.539 17:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:40:34.539 17:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:40:34.539 17:37:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:34.539 17:37:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:34.539 [2024-11-26 17:37:34.988084] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:40:34.539 17:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:34.539 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:40:34.539 "name": "raid_bdev1", 00:40:34.539 "aliases": [ 00:40:34.539 "0a554d56-c507-4a31-95df-a5d36844ad37" 00:40:34.539 ], 00:40:34.539 "product_name": "Raid Volume", 00:40:34.539 "block_size": 512, 00:40:34.539 "num_blocks": 126976, 00:40:34.539 "uuid": "0a554d56-c507-4a31-95df-a5d36844ad37", 00:40:34.539 "assigned_rate_limits": { 00:40:34.539 "rw_ios_per_sec": 0, 00:40:34.539 "rw_mbytes_per_sec": 0, 00:40:34.539 "r_mbytes_per_sec": 0, 00:40:34.539 "w_mbytes_per_sec": 0 00:40:34.539 }, 00:40:34.539 "claimed": false, 00:40:34.539 "zoned": false, 00:40:34.539 "supported_io_types": { 00:40:34.539 "read": true, 00:40:34.539 "write": true, 00:40:34.539 "unmap": false, 00:40:34.539 "flush": false, 00:40:34.539 "reset": true, 00:40:34.539 "nvme_admin": false, 00:40:34.539 "nvme_io": false, 00:40:34.539 "nvme_io_md": false, 00:40:34.539 "write_zeroes": true, 00:40:34.539 "zcopy": false, 00:40:34.539 
"get_zone_info": false, 00:40:34.539 "zone_management": false, 00:40:34.539 "zone_append": false, 00:40:34.539 "compare": false, 00:40:34.539 "compare_and_write": false, 00:40:34.539 "abort": false, 00:40:34.539 "seek_hole": false, 00:40:34.539 "seek_data": false, 00:40:34.539 "copy": false, 00:40:34.539 "nvme_iov_md": false 00:40:34.539 }, 00:40:34.539 "driver_specific": { 00:40:34.539 "raid": { 00:40:34.539 "uuid": "0a554d56-c507-4a31-95df-a5d36844ad37", 00:40:34.539 "strip_size_kb": 64, 00:40:34.539 "state": "online", 00:40:34.539 "raid_level": "raid5f", 00:40:34.539 "superblock": true, 00:40:34.539 "num_base_bdevs": 3, 00:40:34.539 "num_base_bdevs_discovered": 3, 00:40:34.539 "num_base_bdevs_operational": 3, 00:40:34.539 "base_bdevs_list": [ 00:40:34.539 { 00:40:34.539 "name": "pt1", 00:40:34.539 "uuid": "00000000-0000-0000-0000-000000000001", 00:40:34.539 "is_configured": true, 00:40:34.539 "data_offset": 2048, 00:40:34.539 "data_size": 63488 00:40:34.539 }, 00:40:34.539 { 00:40:34.539 "name": "pt2", 00:40:34.539 "uuid": "00000000-0000-0000-0000-000000000002", 00:40:34.539 "is_configured": true, 00:40:34.539 "data_offset": 2048, 00:40:34.539 "data_size": 63488 00:40:34.539 }, 00:40:34.539 { 00:40:34.539 "name": "pt3", 00:40:34.539 "uuid": "00000000-0000-0000-0000-000000000003", 00:40:34.539 "is_configured": true, 00:40:34.539 "data_offset": 2048, 00:40:34.539 "data_size": 63488 00:40:34.539 } 00:40:34.539 ] 00:40:34.539 } 00:40:34.539 } 00:40:34.539 }' 00:40:34.539 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:40:34.539 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:40:34.539 pt2 00:40:34.539 pt3' 00:40:34.539 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:40:34.539 17:37:35 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:40:34.539 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:40:34.539 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:40:34.539 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:40:34.539 17:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:34.539 17:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:34.539 17:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:34.539 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:40:34.539 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:40:34.539 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:40:34.539 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:40:34.539 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:40:34.539 17:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:34.539 17:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:34.539 17:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:34.539 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:40:34.539 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:40:34.539 17:37:35 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:40:34.539 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:40:34.539 17:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:34.539 17:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:34.539 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:40:34.539 17:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:34.539 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:40:34.539 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:40:34.539 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:40:34.539 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:40:34.539 17:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:34.539 17:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:34.539 [2024-11-26 17:37:35.219681] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:40:34.799 17:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:34.799 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 0a554d56-c507-4a31-95df-a5d36844ad37 '!=' 0a554d56-c507-4a31-95df-a5d36844ad37 ']' 00:40:34.799 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:40:34.799 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:40:34.799 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:40:34.799 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:40:34.799 17:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:34.799 17:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:34.799 [2024-11-26 17:37:35.263424] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:40:34.799 17:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:34.799 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:40:34.799 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:34.799 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:34.799 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:34.799 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:34.799 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:40:34.799 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:34.799 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:34.799 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:34.799 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:34.799 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:34.799 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:34.799 17:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:40:34.799 17:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:34.799 17:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:34.799 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:34.799 "name": "raid_bdev1", 00:40:34.799 "uuid": "0a554d56-c507-4a31-95df-a5d36844ad37", 00:40:34.799 "strip_size_kb": 64, 00:40:34.799 "state": "online", 00:40:34.799 "raid_level": "raid5f", 00:40:34.799 "superblock": true, 00:40:34.799 "num_base_bdevs": 3, 00:40:34.799 "num_base_bdevs_discovered": 2, 00:40:34.799 "num_base_bdevs_operational": 2, 00:40:34.799 "base_bdevs_list": [ 00:40:34.799 { 00:40:34.799 "name": null, 00:40:34.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:34.799 "is_configured": false, 00:40:34.799 "data_offset": 0, 00:40:34.799 "data_size": 63488 00:40:34.799 }, 00:40:34.799 { 00:40:34.799 "name": "pt2", 00:40:34.799 "uuid": "00000000-0000-0000-0000-000000000002", 00:40:34.799 "is_configured": true, 00:40:34.799 "data_offset": 2048, 00:40:34.799 "data_size": 63488 00:40:34.799 }, 00:40:34.799 { 00:40:34.799 "name": "pt3", 00:40:34.799 "uuid": "00000000-0000-0000-0000-000000000003", 00:40:34.799 "is_configured": true, 00:40:34.799 "data_offset": 2048, 00:40:34.799 "data_size": 63488 00:40:34.799 } 00:40:34.799 ] 00:40:34.799 }' 00:40:34.800 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:34.800 17:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:35.058 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:40:35.058 17:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:35.058 17:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:35.058 [2024-11-26 17:37:35.702640] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:40:35.058 [2024-11-26 17:37:35.702675] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:40:35.058 [2024-11-26 17:37:35.702762] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:40:35.059 [2024-11-26 17:37:35.702819] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:40:35.059 [2024-11-26 17:37:35.702833] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:40:35.059 17:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:35.059 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:35.059 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:40:35.059 17:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:35.059 17:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:35.059 17:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:35.318 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:40:35.318 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:40:35.318 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:40:35.318 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:40:35.318 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:40:35.318 17:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:35.318 17:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:40:35.318 17:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:35.318 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:40:35.318 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:40:35.318 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:40:35.318 17:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:35.318 17:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:35.318 17:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:35.318 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:40:35.318 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:40:35.318 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:40:35.318 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:40:35.318 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:40:35.318 17:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:35.318 17:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:35.318 [2024-11-26 17:37:35.790447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:40:35.318 [2024-11-26 17:37:35.790540] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:35.318 [2024-11-26 17:37:35.790558] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:40:35.318 [2024-11-26 17:37:35.790568] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:40:35.318 [2024-11-26 17:37:35.792777] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:35.318 [2024-11-26 17:37:35.792819] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:40:35.318 [2024-11-26 17:37:35.792894] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:40:35.318 [2024-11-26 17:37:35.792948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:40:35.318 pt2 00:40:35.318 17:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:35.318 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:40:35.318 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:35.318 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:40:35.318 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:35.318 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:35.318 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:40:35.318 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:35.318 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:35.318 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:35.318 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:35.318 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:35.318 17:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:40:35.318 17:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:35.318 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:35.318 17:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:35.318 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:35.318 "name": "raid_bdev1", 00:40:35.318 "uuid": "0a554d56-c507-4a31-95df-a5d36844ad37", 00:40:35.318 "strip_size_kb": 64, 00:40:35.318 "state": "configuring", 00:40:35.318 "raid_level": "raid5f", 00:40:35.318 "superblock": true, 00:40:35.318 "num_base_bdevs": 3, 00:40:35.318 "num_base_bdevs_discovered": 1, 00:40:35.318 "num_base_bdevs_operational": 2, 00:40:35.318 "base_bdevs_list": [ 00:40:35.318 { 00:40:35.318 "name": null, 00:40:35.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:35.318 "is_configured": false, 00:40:35.318 "data_offset": 2048, 00:40:35.318 "data_size": 63488 00:40:35.318 }, 00:40:35.318 { 00:40:35.318 "name": "pt2", 00:40:35.318 "uuid": "00000000-0000-0000-0000-000000000002", 00:40:35.318 "is_configured": true, 00:40:35.318 "data_offset": 2048, 00:40:35.318 "data_size": 63488 00:40:35.318 }, 00:40:35.318 { 00:40:35.318 "name": null, 00:40:35.318 "uuid": "00000000-0000-0000-0000-000000000003", 00:40:35.318 "is_configured": false, 00:40:35.318 "data_offset": 2048, 00:40:35.318 "data_size": 63488 00:40:35.318 } 00:40:35.318 ] 00:40:35.318 }' 00:40:35.318 17:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:35.318 17:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:35.578 17:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:40:35.578 17:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:40:35.578 17:37:36 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:40:35.578 17:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:40:35.578 17:37:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:35.578 17:37:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:35.578 [2024-11-26 17:37:36.257663] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:40:35.578 [2024-11-26 17:37:36.257747] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:35.578 [2024-11-26 17:37:36.257770] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:40:35.578 [2024-11-26 17:37:36.257781] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:35.578 [2024-11-26 17:37:36.258246] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:35.578 [2024-11-26 17:37:36.258277] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:40:35.578 [2024-11-26 17:37:36.258364] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:40:35.578 [2024-11-26 17:37:36.258392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:40:35.578 [2024-11-26 17:37:36.258532] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:40:35.578 [2024-11-26 17:37:36.258552] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:40:35.578 [2024-11-26 17:37:36.258810] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:40:35.578 [2024-11-26 17:37:36.264482] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:40:35.578 [2024-11-26 17:37:36.264505] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000008200 00:40:35.578 [2024-11-26 17:37:36.264858] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:35.578 pt3 00:40:35.578 17:37:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:35.578 17:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:40:35.578 17:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:35.578 17:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:35.578 17:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:35.578 17:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:35.578 17:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:40:35.578 17:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:35.578 17:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:35.578 17:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:35.578 17:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:35.837 17:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:35.837 17:37:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:35.837 17:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:35.837 17:37:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:35.837 17:37:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:35.837 17:37:36 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:35.837 "name": "raid_bdev1", 00:40:35.837 "uuid": "0a554d56-c507-4a31-95df-a5d36844ad37", 00:40:35.837 "strip_size_kb": 64, 00:40:35.837 "state": "online", 00:40:35.837 "raid_level": "raid5f", 00:40:35.837 "superblock": true, 00:40:35.837 "num_base_bdevs": 3, 00:40:35.837 "num_base_bdevs_discovered": 2, 00:40:35.837 "num_base_bdevs_operational": 2, 00:40:35.837 "base_bdevs_list": [ 00:40:35.837 { 00:40:35.837 "name": null, 00:40:35.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:35.837 "is_configured": false, 00:40:35.837 "data_offset": 2048, 00:40:35.837 "data_size": 63488 00:40:35.837 }, 00:40:35.837 { 00:40:35.837 "name": "pt2", 00:40:35.837 "uuid": "00000000-0000-0000-0000-000000000002", 00:40:35.837 "is_configured": true, 00:40:35.837 "data_offset": 2048, 00:40:35.837 "data_size": 63488 00:40:35.837 }, 00:40:35.837 { 00:40:35.837 "name": "pt3", 00:40:35.837 "uuid": "00000000-0000-0000-0000-000000000003", 00:40:35.837 "is_configured": true, 00:40:35.837 "data_offset": 2048, 00:40:35.837 "data_size": 63488 00:40:35.837 } 00:40:35.837 ] 00:40:35.837 }' 00:40:35.837 17:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:35.837 17:37:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:36.095 17:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:40:36.095 17:37:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:36.095 17:37:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:36.095 [2024-11-26 17:37:36.687610] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:40:36.095 [2024-11-26 17:37:36.687644] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:40:36.095 [2024-11-26 17:37:36.687761] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:40:36.096 [2024-11-26 17:37:36.687831] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:40:36.096 [2024-11-26 17:37:36.687842] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:40:36.096 17:37:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:36.096 17:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:40:36.096 17:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:36.096 17:37:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:36.096 17:37:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:36.096 17:37:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:36.096 17:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:40:36.096 17:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:40:36.096 17:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:40:36.096 17:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:40:36.096 17:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:40:36.096 17:37:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:36.096 17:37:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:36.096 17:37:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:36.096 17:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:40:36.096 17:37:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:36.096 17:37:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:36.096 [2024-11-26 17:37:36.743541] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:40:36.096 [2024-11-26 17:37:36.743611] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:36.096 [2024-11-26 17:37:36.743630] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:40:36.096 [2024-11-26 17:37:36.743639] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:36.096 [2024-11-26 17:37:36.746142] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:36.096 [2024-11-26 17:37:36.746180] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:40:36.096 [2024-11-26 17:37:36.746265] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:40:36.096 [2024-11-26 17:37:36.746309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:40:36.096 [2024-11-26 17:37:36.746464] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:40:36.096 [2024-11-26 17:37:36.746489] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:40:36.096 [2024-11-26 17:37:36.746506] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:40:36.096 [2024-11-26 17:37:36.746596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:40:36.096 pt1 00:40:36.096 17:37:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:36.096 17:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:40:36.096 17:37:36 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:40:36.096 17:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:36.096 17:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:40:36.096 17:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:36.096 17:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:36.096 17:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:40:36.096 17:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:36.096 17:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:36.096 17:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:36.096 17:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:36.096 17:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:36.096 17:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:36.096 17:37:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:36.096 17:37:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:36.096 17:37:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:36.354 17:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:36.354 "name": "raid_bdev1", 00:40:36.354 "uuid": "0a554d56-c507-4a31-95df-a5d36844ad37", 00:40:36.354 "strip_size_kb": 64, 00:40:36.354 "state": "configuring", 00:40:36.354 "raid_level": "raid5f", 00:40:36.354 
"superblock": true, 00:40:36.354 "num_base_bdevs": 3, 00:40:36.354 "num_base_bdevs_discovered": 1, 00:40:36.354 "num_base_bdevs_operational": 2, 00:40:36.354 "base_bdevs_list": [ 00:40:36.354 { 00:40:36.354 "name": null, 00:40:36.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:36.354 "is_configured": false, 00:40:36.354 "data_offset": 2048, 00:40:36.354 "data_size": 63488 00:40:36.354 }, 00:40:36.354 { 00:40:36.354 "name": "pt2", 00:40:36.354 "uuid": "00000000-0000-0000-0000-000000000002", 00:40:36.355 "is_configured": true, 00:40:36.355 "data_offset": 2048, 00:40:36.355 "data_size": 63488 00:40:36.355 }, 00:40:36.355 { 00:40:36.355 "name": null, 00:40:36.355 "uuid": "00000000-0000-0000-0000-000000000003", 00:40:36.355 "is_configured": false, 00:40:36.355 "data_offset": 2048, 00:40:36.355 "data_size": 63488 00:40:36.355 } 00:40:36.355 ] 00:40:36.355 }' 00:40:36.355 17:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:36.355 17:37:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:36.614 17:37:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:40:36.614 17:37:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:40:36.614 17:37:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:36.614 17:37:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:36.614 17:37:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:36.614 17:37:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:40:36.614 17:37:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:40:36.614 17:37:37 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:40:36.614 17:37:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:36.614 [2024-11-26 17:37:37.186822] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:40:36.614 [2024-11-26 17:37:37.186895] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:36.614 [2024-11-26 17:37:37.186917] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:40:36.614 [2024-11-26 17:37:37.186928] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:36.614 [2024-11-26 17:37:37.187490] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:36.614 [2024-11-26 17:37:37.187534] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:40:36.614 [2024-11-26 17:37:37.187643] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:40:36.614 [2024-11-26 17:37:37.187677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:40:36.614 [2024-11-26 17:37:37.187836] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:40:36.614 [2024-11-26 17:37:37.187854] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:40:36.614 [2024-11-26 17:37:37.188167] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:40:36.614 [2024-11-26 17:37:37.195169] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:40:36.614 [2024-11-26 17:37:37.195204] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:40:36.614 [2024-11-26 17:37:37.195480] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:36.614 pt3 00:40:36.614 17:37:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:40:36.614 17:37:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:40:36.614 17:37:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:36.614 17:37:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:36.614 17:37:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:36.614 17:37:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:36.614 17:37:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:40:36.614 17:37:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:36.614 17:37:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:36.614 17:37:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:36.614 17:37:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:36.614 17:37:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:36.614 17:37:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:36.614 17:37:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:36.614 17:37:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:36.614 17:37:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:36.614 17:37:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:36.614 "name": "raid_bdev1", 00:40:36.614 "uuid": "0a554d56-c507-4a31-95df-a5d36844ad37", 00:40:36.614 "strip_size_kb": 64, 00:40:36.614 "state": "online", 00:40:36.614 "raid_level": 
"raid5f", 00:40:36.614 "superblock": true, 00:40:36.614 "num_base_bdevs": 3, 00:40:36.614 "num_base_bdevs_discovered": 2, 00:40:36.614 "num_base_bdevs_operational": 2, 00:40:36.614 "base_bdevs_list": [ 00:40:36.614 { 00:40:36.614 "name": null, 00:40:36.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:36.614 "is_configured": false, 00:40:36.614 "data_offset": 2048, 00:40:36.614 "data_size": 63488 00:40:36.614 }, 00:40:36.614 { 00:40:36.614 "name": "pt2", 00:40:36.614 "uuid": "00000000-0000-0000-0000-000000000002", 00:40:36.614 "is_configured": true, 00:40:36.614 "data_offset": 2048, 00:40:36.614 "data_size": 63488 00:40:36.614 }, 00:40:36.614 { 00:40:36.614 "name": "pt3", 00:40:36.614 "uuid": "00000000-0000-0000-0000-000000000003", 00:40:36.614 "is_configured": true, 00:40:36.614 "data_offset": 2048, 00:40:36.614 "data_size": 63488 00:40:36.614 } 00:40:36.614 ] 00:40:36.614 }' 00:40:36.614 17:37:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:36.614 17:37:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:37.183 17:37:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:40:37.183 17:37:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:40:37.183 17:37:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:37.183 17:37:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:37.183 17:37:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:37.183 17:37:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:40:37.183 17:37:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:40:37.183 17:37:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 
00:40:37.183 17:37:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:37.183 17:37:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:37.183 [2024-11-26 17:37:37.690591] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:40:37.183 17:37:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:37.183 17:37:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 0a554d56-c507-4a31-95df-a5d36844ad37 '!=' 0a554d56-c507-4a31-95df-a5d36844ad37 ']' 00:40:37.183 17:37:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81436 00:40:37.183 17:37:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81436 ']' 00:40:37.183 17:37:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81436 00:40:37.183 17:37:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:40:37.183 17:37:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:37.183 17:37:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81436 00:40:37.183 17:37:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:37.183 killing process with pid 81436 00:40:37.183 17:37:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:37.183 17:37:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81436' 00:40:37.183 17:37:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81436 00:40:37.183 [2024-11-26 17:37:37.760889] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:40:37.183 [2024-11-26 17:37:37.761004] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:40:37.183 17:37:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 81436 00:40:37.183 [2024-11-26 17:37:37.761089] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:40:37.183 [2024-11-26 17:37:37.761102] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:40:37.442 [2024-11-26 17:37:38.060727] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:40:38.823 17:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:40:38.823 00:40:38.823 real 0m7.574s 00:40:38.823 user 0m11.773s 00:40:38.823 sys 0m1.403s 00:40:38.823 17:37:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:38.823 17:37:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:40:38.823 ************************************ 00:40:38.823 END TEST raid5f_superblock_test 00:40:38.823 ************************************ 00:40:38.823 17:37:39 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:40:38.823 17:37:39 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:40:38.823 17:37:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:40:38.823 17:37:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:38.823 17:37:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:40:38.823 ************************************ 00:40:38.823 START TEST raid5f_rebuild_test 00:40:38.823 ************************************ 00:40:38.823 17:37:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:40:38.823 17:37:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:40:38.823 17:37:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:40:38.823 17:37:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:40:38.823 17:37:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:40:38.823 17:37:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:40:38.823 17:37:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:40:38.823 17:37:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:40:38.823 17:37:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:40:38.823 17:37:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:40:38.823 17:37:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:40:38.823 17:37:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:40:38.823 17:37:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:40:38.823 17:37:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:40:38.823 17:37:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:40:38.823 17:37:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:40:38.823 17:37:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:40:38.823 17:37:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:40:38.823 17:37:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:40:38.823 17:37:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:40:38.823 17:37:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:40:38.823 17:37:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:40:38.823 17:37:39 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:40:38.823 17:37:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:40:38.823 17:37:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:40:38.823 17:37:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:40:38.823 17:37:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:40:38.823 17:37:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:40:38.823 17:37:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:40:38.823 17:37:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81874 00:40:38.823 17:37:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:40:38.823 17:37:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81874 00:40:38.823 17:37:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81874 ']' 00:40:38.823 17:37:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:38.823 17:37:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:38.823 17:37:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:38.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:40:38.823 17:37:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:38.823 17:37:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:38.823 [2024-11-26 17:37:39.366874] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:40:38.823 [2024-11-26 17:37:39.367102] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:40:38.823 Zero copy mechanism will not be used. 00:40:38.823 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81874 ] 00:40:39.082 [2024-11-26 17:37:39.539391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:39.082 [2024-11-26 17:37:39.654972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:39.339 [2024-11-26 17:37:39.855464] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:40:39.339 [2024-11-26 17:37:39.855545] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:40:39.598 17:37:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:39.598 17:37:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:40:39.598 17:37:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:40:39.598 17:37:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:40:39.598 17:37:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:39.598 17:37:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:39.598 BaseBdev1_malloc 00:40:39.598 17:37:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:39.598 
17:37:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:40:39.598 17:37:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:39.598 17:37:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:39.598 [2024-11-26 17:37:40.247945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:40:39.598 [2024-11-26 17:37:40.248054] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:39.598 [2024-11-26 17:37:40.248094] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:40:39.598 [2024-11-26 17:37:40.248126] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:39.598 [2024-11-26 17:37:40.250206] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:39.598 [2024-11-26 17:37:40.250302] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:40:39.598 BaseBdev1 00:40:39.598 17:37:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:39.598 17:37:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:40:39.598 17:37:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:40:39.598 17:37:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:39.598 17:37:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:39.857 BaseBdev2_malloc 00:40:39.857 17:37:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:39.857 17:37:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:40:39.857 17:37:40 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:40:39.857 17:37:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:39.857 [2024-11-26 17:37:40.303125] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:40:39.857 [2024-11-26 17:37:40.303235] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:39.857 [2024-11-26 17:37:40.303275] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:40:39.857 [2024-11-26 17:37:40.303308] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:39.857 [2024-11-26 17:37:40.305597] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:39.857 [2024-11-26 17:37:40.305675] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:40:39.857 BaseBdev2 00:40:39.857 17:37:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:39.857 17:37:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:40:39.857 17:37:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:40:39.857 17:37:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:39.857 17:37:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:39.857 BaseBdev3_malloc 00:40:39.857 17:37:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:39.857 17:37:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:40:39.857 17:37:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:39.857 17:37:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:39.857 [2024-11-26 17:37:40.373344] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:40:39.857 [2024-11-26 17:37:40.373465] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:39.857 [2024-11-26 17:37:40.373493] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:40:39.857 [2024-11-26 17:37:40.373504] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:39.857 [2024-11-26 17:37:40.375677] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:39.857 [2024-11-26 17:37:40.375721] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:40:39.857 BaseBdev3 00:40:39.857 17:37:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:39.857 17:37:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:40:39.857 17:37:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:39.857 17:37:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:39.857 spare_malloc 00:40:39.857 17:37:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:39.857 17:37:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:40:39.857 17:37:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:39.857 17:37:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:39.857 spare_delay 00:40:39.857 17:37:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:39.857 17:37:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:40:39.857 17:37:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:40:39.857 17:37:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:39.857 [2024-11-26 17:37:40.441438] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:40:39.857 [2024-11-26 17:37:40.441561] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:39.857 [2024-11-26 17:37:40.441599] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:40:39.857 [2024-11-26 17:37:40.441646] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:39.857 [2024-11-26 17:37:40.443796] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:39.857 [2024-11-26 17:37:40.443874] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:40:39.857 spare 00:40:39.857 17:37:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:39.857 17:37:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:40:39.857 17:37:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:39.857 17:37:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:39.857 [2024-11-26 17:37:40.453484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:40:39.857 [2024-11-26 17:37:40.455229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:40:39.857 [2024-11-26 17:37:40.455289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:40:39.857 [2024-11-26 17:37:40.455370] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:40:39.857 [2024-11-26 17:37:40.455381] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:40:39.857 [2024-11-26 
17:37:40.455699] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:40:39.857 [2024-11-26 17:37:40.461652] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:40:39.857 [2024-11-26 17:37:40.461723] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:40:39.857 [2024-11-26 17:37:40.461970] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:39.857 17:37:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:39.857 17:37:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:40:39.857 17:37:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:39.857 17:37:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:39.857 17:37:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:39.857 17:37:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:39.857 17:37:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:40:39.857 17:37:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:39.857 17:37:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:39.857 17:37:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:39.857 17:37:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:39.857 17:37:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:39.857 17:37:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:39.857 17:37:40 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:40:39.857 17:37:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:39.857 17:37:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:39.857 17:37:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:39.857 "name": "raid_bdev1", 00:40:39.857 "uuid": "0f926446-f39b-4762-a3b0-967080d2d87f", 00:40:39.857 "strip_size_kb": 64, 00:40:39.857 "state": "online", 00:40:39.857 "raid_level": "raid5f", 00:40:39.857 "superblock": false, 00:40:39.857 "num_base_bdevs": 3, 00:40:39.857 "num_base_bdevs_discovered": 3, 00:40:39.857 "num_base_bdevs_operational": 3, 00:40:39.857 "base_bdevs_list": [ 00:40:39.857 { 00:40:39.857 "name": "BaseBdev1", 00:40:39.857 "uuid": "41923bef-044c-53d2-a807-59ead55efeb0", 00:40:39.857 "is_configured": true, 00:40:39.857 "data_offset": 0, 00:40:39.857 "data_size": 65536 00:40:39.857 }, 00:40:39.857 { 00:40:39.857 "name": "BaseBdev2", 00:40:39.858 "uuid": "a516aecb-fa88-518a-9993-202418c977b3", 00:40:39.858 "is_configured": true, 00:40:39.858 "data_offset": 0, 00:40:39.858 "data_size": 65536 00:40:39.858 }, 00:40:39.858 { 00:40:39.858 "name": "BaseBdev3", 00:40:39.858 "uuid": "ca304675-4725-5333-bd6e-5b29825dd904", 00:40:39.858 "is_configured": true, 00:40:39.858 "data_offset": 0, 00:40:39.858 "data_size": 65536 00:40:39.858 } 00:40:39.858 ] 00:40:39.858 }' 00:40:39.858 17:37:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:39.858 17:37:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:40.429 17:37:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:40:40.429 17:37:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:40:40.429 17:37:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:40.429 17:37:40 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:40.429 [2024-11-26 17:37:40.896129] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:40:40.429 17:37:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:40.429 17:37:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:40:40.429 17:37:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:40.429 17:37:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:40:40.429 17:37:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:40.429 17:37:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:40.429 17:37:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:40.429 17:37:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:40:40.429 17:37:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:40:40.429 17:37:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:40:40.429 17:37:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:40:40.429 17:37:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:40:40.429 17:37:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:40:40.429 17:37:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:40:40.429 17:37:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:40:40.429 17:37:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:40:40.429 17:37:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # 
local nbd_list 00:40:40.429 17:37:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:40:40.429 17:37:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:40:40.429 17:37:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:40:40.429 17:37:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:40:40.701 [2024-11-26 17:37:41.151567] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:40:40.701 /dev/nbd0 00:40:40.701 17:37:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:40:40.701 17:37:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:40:40.701 17:37:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:40:40.701 17:37:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:40:40.701 17:37:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:40:40.701 17:37:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:40:40.701 17:37:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:40:40.701 17:37:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:40:40.701 17:37:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:40:40.701 17:37:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:40:40.701 17:37:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:40.701 1+0 records in 00:40:40.701 1+0 records out 00:40:40.701 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000388214 s, 10.6 MB/s 00:40:40.701 
17:37:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:40.701 17:37:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:40:40.701 17:37:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:40.701 17:37:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:40:40.701 17:37:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:40:40.701 17:37:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:40:40.701 17:37:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:40:40.701 17:37:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:40:40.701 17:37:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:40:40.701 17:37:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:40:40.701 17:37:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:40:41.269 512+0 records in 00:40:41.269 512+0 records out 00:40:41.269 67108864 bytes (67 MB, 64 MiB) copied, 0.429181 s, 156 MB/s 00:40:41.269 17:37:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:40:41.269 17:37:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:40:41.269 17:37:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:40:41.269 17:37:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:40:41.269 17:37:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:40:41.269 17:37:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:40:41.269 17:37:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:40:41.269 17:37:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:40:41.269 [2024-11-26 17:37:41.886855] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:41.269 17:37:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:40:41.269 17:37:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:40:41.269 17:37:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:41.269 17:37:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:41.269 17:37:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:40:41.269 17:37:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:40:41.269 17:37:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:40:41.269 17:37:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:40:41.269 17:37:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:41.269 17:37:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:41.269 [2024-11-26 17:37:41.903674] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:40:41.269 17:37:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:41.269 17:37:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:40:41.269 17:37:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:41.269 17:37:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:41.269 17:37:41 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:41.269 17:37:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:41.269 17:37:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:40:41.269 17:37:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:41.269 17:37:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:41.269 17:37:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:41.269 17:37:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:41.269 17:37:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:41.269 17:37:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:41.269 17:37:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:41.269 17:37:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:41.269 17:37:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:41.269 17:37:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:41.269 "name": "raid_bdev1", 00:40:41.269 "uuid": "0f926446-f39b-4762-a3b0-967080d2d87f", 00:40:41.269 "strip_size_kb": 64, 00:40:41.269 "state": "online", 00:40:41.269 "raid_level": "raid5f", 00:40:41.269 "superblock": false, 00:40:41.269 "num_base_bdevs": 3, 00:40:41.269 "num_base_bdevs_discovered": 2, 00:40:41.269 "num_base_bdevs_operational": 2, 00:40:41.269 "base_bdevs_list": [ 00:40:41.269 { 00:40:41.269 "name": null, 00:40:41.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:41.269 "is_configured": false, 00:40:41.269 "data_offset": 0, 00:40:41.269 "data_size": 65536 00:40:41.269 }, 00:40:41.269 { 00:40:41.269 
"name": "BaseBdev2", 00:40:41.269 "uuid": "a516aecb-fa88-518a-9993-202418c977b3", 00:40:41.269 "is_configured": true, 00:40:41.269 "data_offset": 0, 00:40:41.269 "data_size": 65536 00:40:41.269 }, 00:40:41.269 { 00:40:41.269 "name": "BaseBdev3", 00:40:41.269 "uuid": "ca304675-4725-5333-bd6e-5b29825dd904", 00:40:41.269 "is_configured": true, 00:40:41.269 "data_offset": 0, 00:40:41.269 "data_size": 65536 00:40:41.269 } 00:40:41.269 ] 00:40:41.269 }' 00:40:41.269 17:37:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:41.269 17:37:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:41.837 17:37:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:40:41.837 17:37:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:41.837 17:37:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:41.837 [2024-11-26 17:37:42.318940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:40:41.837 [2024-11-26 17:37:42.336567] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:40:41.837 17:37:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:41.837 17:37:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:40:41.837 [2024-11-26 17:37:42.344867] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:40:42.773 17:37:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:42.773 17:37:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:42.773 17:37:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:40:42.773 17:37:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:40:42.773 17:37:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:42.773 17:37:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:42.773 17:37:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:42.773 17:37:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:42.773 17:37:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:42.773 17:37:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:42.773 17:37:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:42.773 "name": "raid_bdev1", 00:40:42.773 "uuid": "0f926446-f39b-4762-a3b0-967080d2d87f", 00:40:42.773 "strip_size_kb": 64, 00:40:42.773 "state": "online", 00:40:42.773 "raid_level": "raid5f", 00:40:42.773 "superblock": false, 00:40:42.773 "num_base_bdevs": 3, 00:40:42.773 "num_base_bdevs_discovered": 3, 00:40:42.773 "num_base_bdevs_operational": 3, 00:40:42.773 "process": { 00:40:42.773 "type": "rebuild", 00:40:42.773 "target": "spare", 00:40:42.773 "progress": { 00:40:42.773 "blocks": 18432, 00:40:42.773 "percent": 14 00:40:42.773 } 00:40:42.773 }, 00:40:42.773 "base_bdevs_list": [ 00:40:42.773 { 00:40:42.773 "name": "spare", 00:40:42.773 "uuid": "1d2aa21a-2ee1-574a-bc8f-dd1d41b2c514", 00:40:42.773 "is_configured": true, 00:40:42.773 "data_offset": 0, 00:40:42.773 "data_size": 65536 00:40:42.773 }, 00:40:42.773 { 00:40:42.773 "name": "BaseBdev2", 00:40:42.773 "uuid": "a516aecb-fa88-518a-9993-202418c977b3", 00:40:42.773 "is_configured": true, 00:40:42.773 "data_offset": 0, 00:40:42.773 "data_size": 65536 00:40:42.773 }, 00:40:42.773 { 00:40:42.773 "name": "BaseBdev3", 00:40:42.773 "uuid": "ca304675-4725-5333-bd6e-5b29825dd904", 00:40:42.773 "is_configured": true, 00:40:42.773 "data_offset": 0, 00:40:42.773 
"data_size": 65536 00:40:42.773 } 00:40:42.773 ] 00:40:42.773 }' 00:40:42.773 17:37:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:42.773 17:37:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:42.773 17:37:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:43.033 17:37:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:40:43.033 17:37:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:40:43.033 17:37:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:43.033 17:37:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:43.033 [2024-11-26 17:37:43.496660] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:40:43.033 [2024-11-26 17:37:43.559046] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:40:43.033 [2024-11-26 17:37:43.559124] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:43.033 [2024-11-26 17:37:43.559147] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:40:43.033 [2024-11-26 17:37:43.559156] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:40:43.033 17:37:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:43.033 17:37:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:40:43.033 17:37:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:43.033 17:37:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:43.033 17:37:43 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:43.033 17:37:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:43.033 17:37:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:40:43.033 17:37:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:43.033 17:37:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:43.033 17:37:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:43.033 17:37:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:43.033 17:37:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:43.033 17:37:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:43.033 17:37:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:43.033 17:37:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:43.033 17:37:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:43.033 17:37:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:43.033 "name": "raid_bdev1", 00:40:43.033 "uuid": "0f926446-f39b-4762-a3b0-967080d2d87f", 00:40:43.033 "strip_size_kb": 64, 00:40:43.033 "state": "online", 00:40:43.033 "raid_level": "raid5f", 00:40:43.033 "superblock": false, 00:40:43.033 "num_base_bdevs": 3, 00:40:43.033 "num_base_bdevs_discovered": 2, 00:40:43.033 "num_base_bdevs_operational": 2, 00:40:43.033 "base_bdevs_list": [ 00:40:43.033 { 00:40:43.033 "name": null, 00:40:43.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:43.033 "is_configured": false, 00:40:43.033 "data_offset": 0, 00:40:43.033 "data_size": 65536 00:40:43.033 }, 00:40:43.033 { 00:40:43.033 "name": "BaseBdev2", 00:40:43.033 
"uuid": "a516aecb-fa88-518a-9993-202418c977b3", 00:40:43.033 "is_configured": true, 00:40:43.033 "data_offset": 0, 00:40:43.033 "data_size": 65536 00:40:43.033 }, 00:40:43.033 { 00:40:43.033 "name": "BaseBdev3", 00:40:43.033 "uuid": "ca304675-4725-5333-bd6e-5b29825dd904", 00:40:43.033 "is_configured": true, 00:40:43.033 "data_offset": 0, 00:40:43.033 "data_size": 65536 00:40:43.033 } 00:40:43.033 ] 00:40:43.033 }' 00:40:43.033 17:37:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:43.033 17:37:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:43.603 17:37:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:40:43.603 17:37:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:43.603 17:37:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:40:43.603 17:37:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:40:43.603 17:37:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:43.603 17:37:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:43.603 17:37:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:43.603 17:37:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:43.603 17:37:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:43.603 17:37:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:43.603 17:37:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:43.603 "name": "raid_bdev1", 00:40:43.603 "uuid": "0f926446-f39b-4762-a3b0-967080d2d87f", 00:40:43.603 "strip_size_kb": 64, 00:40:43.603 "state": "online", 00:40:43.603 "raid_level": 
"raid5f", 00:40:43.603 "superblock": false, 00:40:43.603 "num_base_bdevs": 3, 00:40:43.603 "num_base_bdevs_discovered": 2, 00:40:43.603 "num_base_bdevs_operational": 2, 00:40:43.603 "base_bdevs_list": [ 00:40:43.603 { 00:40:43.603 "name": null, 00:40:43.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:43.603 "is_configured": false, 00:40:43.603 "data_offset": 0, 00:40:43.603 "data_size": 65536 00:40:43.603 }, 00:40:43.603 { 00:40:43.603 "name": "BaseBdev2", 00:40:43.603 "uuid": "a516aecb-fa88-518a-9993-202418c977b3", 00:40:43.603 "is_configured": true, 00:40:43.603 "data_offset": 0, 00:40:43.603 "data_size": 65536 00:40:43.603 }, 00:40:43.603 { 00:40:43.603 "name": "BaseBdev3", 00:40:43.603 "uuid": "ca304675-4725-5333-bd6e-5b29825dd904", 00:40:43.603 "is_configured": true, 00:40:43.603 "data_offset": 0, 00:40:43.603 "data_size": 65536 00:40:43.603 } 00:40:43.603 ] 00:40:43.603 }' 00:40:43.603 17:37:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:43.603 17:37:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:40:43.603 17:37:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:43.603 17:37:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:40:43.603 17:37:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:40:43.603 17:37:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:43.603 17:37:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:43.603 [2024-11-26 17:37:44.137196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:40:43.603 [2024-11-26 17:37:44.154379] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:40:43.603 17:37:44 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:43.603 17:37:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:40:43.603 [2024-11-26 17:37:44.162282] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:40:44.540 17:37:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:44.540 17:37:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:44.540 17:37:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:40:44.540 17:37:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:40:44.540 17:37:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:44.540 17:37:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:44.540 17:37:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:44.540 17:37:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:44.540 17:37:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:44.540 17:37:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:44.540 17:37:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:44.540 "name": "raid_bdev1", 00:40:44.540 "uuid": "0f926446-f39b-4762-a3b0-967080d2d87f", 00:40:44.540 "strip_size_kb": 64, 00:40:44.540 "state": "online", 00:40:44.540 "raid_level": "raid5f", 00:40:44.540 "superblock": false, 00:40:44.540 "num_base_bdevs": 3, 00:40:44.540 "num_base_bdevs_discovered": 3, 00:40:44.540 "num_base_bdevs_operational": 3, 00:40:44.540 "process": { 00:40:44.540 "type": "rebuild", 00:40:44.540 "target": "spare", 00:40:44.540 "progress": { 00:40:44.540 "blocks": 20480, 00:40:44.540 
"percent": 15 00:40:44.540 } 00:40:44.540 }, 00:40:44.540 "base_bdevs_list": [ 00:40:44.540 { 00:40:44.540 "name": "spare", 00:40:44.540 "uuid": "1d2aa21a-2ee1-574a-bc8f-dd1d41b2c514", 00:40:44.540 "is_configured": true, 00:40:44.540 "data_offset": 0, 00:40:44.540 "data_size": 65536 00:40:44.540 }, 00:40:44.540 { 00:40:44.540 "name": "BaseBdev2", 00:40:44.540 "uuid": "a516aecb-fa88-518a-9993-202418c977b3", 00:40:44.540 "is_configured": true, 00:40:44.540 "data_offset": 0, 00:40:44.540 "data_size": 65536 00:40:44.540 }, 00:40:44.540 { 00:40:44.540 "name": "BaseBdev3", 00:40:44.540 "uuid": "ca304675-4725-5333-bd6e-5b29825dd904", 00:40:44.540 "is_configured": true, 00:40:44.540 "data_offset": 0, 00:40:44.540 "data_size": 65536 00:40:44.540 } 00:40:44.540 ] 00:40:44.540 }' 00:40:44.540 17:37:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:44.799 17:37:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:44.799 17:37:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:44.799 17:37:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:40:44.799 17:37:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:40:44.799 17:37:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:40:44.799 17:37:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:40:44.799 17:37:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=560 00:40:44.799 17:37:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:40:44.799 17:37:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:44.799 17:37:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:40:44.799 17:37:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:40:44.799 17:37:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:40:44.799 17:37:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:44.799 17:37:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:44.799 17:37:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:44.799 17:37:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:44.799 17:37:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:44.799 17:37:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:44.799 17:37:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:44.799 "name": "raid_bdev1", 00:40:44.799 "uuid": "0f926446-f39b-4762-a3b0-967080d2d87f", 00:40:44.799 "strip_size_kb": 64, 00:40:44.799 "state": "online", 00:40:44.799 "raid_level": "raid5f", 00:40:44.799 "superblock": false, 00:40:44.799 "num_base_bdevs": 3, 00:40:44.799 "num_base_bdevs_discovered": 3, 00:40:44.799 "num_base_bdevs_operational": 3, 00:40:44.799 "process": { 00:40:44.799 "type": "rebuild", 00:40:44.799 "target": "spare", 00:40:44.799 "progress": { 00:40:44.799 "blocks": 22528, 00:40:44.799 "percent": 17 00:40:44.799 } 00:40:44.799 }, 00:40:44.799 "base_bdevs_list": [ 00:40:44.799 { 00:40:44.799 "name": "spare", 00:40:44.799 "uuid": "1d2aa21a-2ee1-574a-bc8f-dd1d41b2c514", 00:40:44.799 "is_configured": true, 00:40:44.799 "data_offset": 0, 00:40:44.799 "data_size": 65536 00:40:44.799 }, 00:40:44.799 { 00:40:44.799 "name": "BaseBdev2", 00:40:44.799 "uuid": "a516aecb-fa88-518a-9993-202418c977b3", 00:40:44.799 "is_configured": true, 00:40:44.799 "data_offset": 0, 00:40:44.799 
"data_size": 65536 00:40:44.799 }, 00:40:44.800 { 00:40:44.800 "name": "BaseBdev3", 00:40:44.800 "uuid": "ca304675-4725-5333-bd6e-5b29825dd904", 00:40:44.800 "is_configured": true, 00:40:44.800 "data_offset": 0, 00:40:44.800 "data_size": 65536 00:40:44.800 } 00:40:44.800 ] 00:40:44.800 }' 00:40:44.800 17:37:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:44.800 17:37:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:44.800 17:37:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:44.800 17:37:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:40:44.800 17:37:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:40:46.181 17:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:40:46.181 17:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:46.181 17:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:46.181 17:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:40:46.181 17:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:40:46.181 17:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:46.181 17:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:46.181 17:37:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:46.181 17:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:46.181 17:37:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:46.181 17:37:46 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:46.181 17:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:46.181 "name": "raid_bdev1", 00:40:46.181 "uuid": "0f926446-f39b-4762-a3b0-967080d2d87f", 00:40:46.181 "strip_size_kb": 64, 00:40:46.181 "state": "online", 00:40:46.181 "raid_level": "raid5f", 00:40:46.181 "superblock": false, 00:40:46.181 "num_base_bdevs": 3, 00:40:46.181 "num_base_bdevs_discovered": 3, 00:40:46.181 "num_base_bdevs_operational": 3, 00:40:46.181 "process": { 00:40:46.181 "type": "rebuild", 00:40:46.181 "target": "spare", 00:40:46.181 "progress": { 00:40:46.181 "blocks": 45056, 00:40:46.181 "percent": 34 00:40:46.181 } 00:40:46.181 }, 00:40:46.181 "base_bdevs_list": [ 00:40:46.181 { 00:40:46.181 "name": "spare", 00:40:46.181 "uuid": "1d2aa21a-2ee1-574a-bc8f-dd1d41b2c514", 00:40:46.181 "is_configured": true, 00:40:46.181 "data_offset": 0, 00:40:46.181 "data_size": 65536 00:40:46.181 }, 00:40:46.181 { 00:40:46.181 "name": "BaseBdev2", 00:40:46.181 "uuid": "a516aecb-fa88-518a-9993-202418c977b3", 00:40:46.181 "is_configured": true, 00:40:46.181 "data_offset": 0, 00:40:46.181 "data_size": 65536 00:40:46.181 }, 00:40:46.181 { 00:40:46.181 "name": "BaseBdev3", 00:40:46.181 "uuid": "ca304675-4725-5333-bd6e-5b29825dd904", 00:40:46.181 "is_configured": true, 00:40:46.181 "data_offset": 0, 00:40:46.181 "data_size": 65536 00:40:46.181 } 00:40:46.181 ] 00:40:46.181 }' 00:40:46.181 17:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:46.181 17:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:46.181 17:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:46.181 17:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:40:46.181 17:37:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:40:47.121 17:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:40:47.121 17:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:47.121 17:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:47.121 17:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:40:47.121 17:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:40:47.121 17:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:47.121 17:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:47.121 17:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:47.121 17:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:47.121 17:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:47.121 17:37:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:47.121 17:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:47.121 "name": "raid_bdev1", 00:40:47.121 "uuid": "0f926446-f39b-4762-a3b0-967080d2d87f", 00:40:47.121 "strip_size_kb": 64, 00:40:47.121 "state": "online", 00:40:47.121 "raid_level": "raid5f", 00:40:47.121 "superblock": false, 00:40:47.121 "num_base_bdevs": 3, 00:40:47.121 "num_base_bdevs_discovered": 3, 00:40:47.121 "num_base_bdevs_operational": 3, 00:40:47.121 "process": { 00:40:47.121 "type": "rebuild", 00:40:47.121 "target": "spare", 00:40:47.121 "progress": { 00:40:47.121 "blocks": 69632, 00:40:47.121 "percent": 53 00:40:47.121 } 00:40:47.121 }, 00:40:47.121 "base_bdevs_list": [ 00:40:47.121 { 00:40:47.121 "name": "spare", 00:40:47.121 "uuid": 
"1d2aa21a-2ee1-574a-bc8f-dd1d41b2c514", 00:40:47.121 "is_configured": true, 00:40:47.121 "data_offset": 0, 00:40:47.121 "data_size": 65536 00:40:47.121 }, 00:40:47.121 { 00:40:47.121 "name": "BaseBdev2", 00:40:47.121 "uuid": "a516aecb-fa88-518a-9993-202418c977b3", 00:40:47.121 "is_configured": true, 00:40:47.121 "data_offset": 0, 00:40:47.121 "data_size": 65536 00:40:47.121 }, 00:40:47.121 { 00:40:47.121 "name": "BaseBdev3", 00:40:47.121 "uuid": "ca304675-4725-5333-bd6e-5b29825dd904", 00:40:47.121 "is_configured": true, 00:40:47.121 "data_offset": 0, 00:40:47.121 "data_size": 65536 00:40:47.121 } 00:40:47.121 ] 00:40:47.121 }' 00:40:47.121 17:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:47.121 17:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:47.121 17:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:47.121 17:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:40:47.121 17:37:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:40:48.060 17:37:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:40:48.060 17:37:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:48.060 17:37:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:48.060 17:37:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:40:48.060 17:37:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:40:48.060 17:37:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:48.060 17:37:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:48.060 17:37:48 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:48.060 17:37:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:48.060 17:37:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:48.320 17:37:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:48.320 17:37:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:48.320 "name": "raid_bdev1", 00:40:48.320 "uuid": "0f926446-f39b-4762-a3b0-967080d2d87f", 00:40:48.320 "strip_size_kb": 64, 00:40:48.320 "state": "online", 00:40:48.320 "raid_level": "raid5f", 00:40:48.320 "superblock": false, 00:40:48.320 "num_base_bdevs": 3, 00:40:48.320 "num_base_bdevs_discovered": 3, 00:40:48.320 "num_base_bdevs_operational": 3, 00:40:48.320 "process": { 00:40:48.320 "type": "rebuild", 00:40:48.320 "target": "spare", 00:40:48.320 "progress": { 00:40:48.320 "blocks": 92160, 00:40:48.320 "percent": 70 00:40:48.320 } 00:40:48.320 }, 00:40:48.320 "base_bdevs_list": [ 00:40:48.320 { 00:40:48.320 "name": "spare", 00:40:48.320 "uuid": "1d2aa21a-2ee1-574a-bc8f-dd1d41b2c514", 00:40:48.320 "is_configured": true, 00:40:48.320 "data_offset": 0, 00:40:48.320 "data_size": 65536 00:40:48.320 }, 00:40:48.320 { 00:40:48.320 "name": "BaseBdev2", 00:40:48.320 "uuid": "a516aecb-fa88-518a-9993-202418c977b3", 00:40:48.320 "is_configured": true, 00:40:48.320 "data_offset": 0, 00:40:48.320 "data_size": 65536 00:40:48.320 }, 00:40:48.320 { 00:40:48.320 "name": "BaseBdev3", 00:40:48.320 "uuid": "ca304675-4725-5333-bd6e-5b29825dd904", 00:40:48.320 "is_configured": true, 00:40:48.320 "data_offset": 0, 00:40:48.320 "data_size": 65536 00:40:48.320 } 00:40:48.320 ] 00:40:48.320 }' 00:40:48.320 17:37:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:48.320 17:37:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:48.320 17:37:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:48.320 17:37:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:40:48.320 17:37:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:40:49.259 17:37:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:40:49.259 17:37:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:49.259 17:37:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:49.259 17:37:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:40:49.259 17:37:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:40:49.259 17:37:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:49.259 17:37:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:49.259 17:37:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:49.259 17:37:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:49.259 17:37:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:49.259 17:37:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:49.259 17:37:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:49.259 "name": "raid_bdev1", 00:40:49.259 "uuid": "0f926446-f39b-4762-a3b0-967080d2d87f", 00:40:49.259 "strip_size_kb": 64, 00:40:49.259 "state": "online", 00:40:49.259 "raid_level": "raid5f", 00:40:49.259 "superblock": false, 00:40:49.259 "num_base_bdevs": 3, 00:40:49.259 "num_base_bdevs_discovered": 3, 00:40:49.259 
"num_base_bdevs_operational": 3, 00:40:49.259 "process": { 00:40:49.259 "type": "rebuild", 00:40:49.259 "target": "spare", 00:40:49.259 "progress": { 00:40:49.259 "blocks": 114688, 00:40:49.259 "percent": 87 00:40:49.259 } 00:40:49.259 }, 00:40:49.259 "base_bdevs_list": [ 00:40:49.259 { 00:40:49.259 "name": "spare", 00:40:49.259 "uuid": "1d2aa21a-2ee1-574a-bc8f-dd1d41b2c514", 00:40:49.259 "is_configured": true, 00:40:49.259 "data_offset": 0, 00:40:49.259 "data_size": 65536 00:40:49.259 }, 00:40:49.259 { 00:40:49.259 "name": "BaseBdev2", 00:40:49.259 "uuid": "a516aecb-fa88-518a-9993-202418c977b3", 00:40:49.259 "is_configured": true, 00:40:49.259 "data_offset": 0, 00:40:49.259 "data_size": 65536 00:40:49.259 }, 00:40:49.259 { 00:40:49.259 "name": "BaseBdev3", 00:40:49.259 "uuid": "ca304675-4725-5333-bd6e-5b29825dd904", 00:40:49.259 "is_configured": true, 00:40:49.259 "data_offset": 0, 00:40:49.259 "data_size": 65536 00:40:49.259 } 00:40:49.259 ] 00:40:49.259 }' 00:40:49.259 17:37:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:49.519 17:37:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:49.519 17:37:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:49.519 17:37:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:40:49.519 17:37:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:40:50.095 [2024-11-26 17:37:50.632397] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:40:50.095 [2024-11-26 17:37:50.632547] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:40:50.095 [2024-11-26 17:37:50.632605] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:50.362 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:40:50.362 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:50.362 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:50.362 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:40:50.362 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:40:50.362 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:50.362 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:50.362 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:50.362 17:37:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:50.362 17:37:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:50.362 17:37:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:50.623 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:50.623 "name": "raid_bdev1", 00:40:50.623 "uuid": "0f926446-f39b-4762-a3b0-967080d2d87f", 00:40:50.623 "strip_size_kb": 64, 00:40:50.623 "state": "online", 00:40:50.623 "raid_level": "raid5f", 00:40:50.623 "superblock": false, 00:40:50.623 "num_base_bdevs": 3, 00:40:50.623 "num_base_bdevs_discovered": 3, 00:40:50.623 "num_base_bdevs_operational": 3, 00:40:50.623 "base_bdevs_list": [ 00:40:50.623 { 00:40:50.623 "name": "spare", 00:40:50.623 "uuid": "1d2aa21a-2ee1-574a-bc8f-dd1d41b2c514", 00:40:50.623 "is_configured": true, 00:40:50.623 "data_offset": 0, 00:40:50.623 "data_size": 65536 00:40:50.623 }, 00:40:50.623 { 00:40:50.623 "name": "BaseBdev2", 00:40:50.623 "uuid": "a516aecb-fa88-518a-9993-202418c977b3", 00:40:50.623 "is_configured": true, 00:40:50.623 
"data_offset": 0, 00:40:50.623 "data_size": 65536 00:40:50.623 }, 00:40:50.623 { 00:40:50.623 "name": "BaseBdev3", 00:40:50.623 "uuid": "ca304675-4725-5333-bd6e-5b29825dd904", 00:40:50.623 "is_configured": true, 00:40:50.623 "data_offset": 0, 00:40:50.623 "data_size": 65536 00:40:50.623 } 00:40:50.623 ] 00:40:50.623 }' 00:40:50.623 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:50.623 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:40:50.623 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:50.623 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:40:50.623 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:40:50.623 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:40:50.623 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:50.623 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:40:50.623 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:40:50.623 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:50.623 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:50.623 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:50.623 17:37:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:50.623 17:37:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:50.623 17:37:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:50.623 17:37:51 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:50.623 "name": "raid_bdev1", 00:40:50.623 "uuid": "0f926446-f39b-4762-a3b0-967080d2d87f", 00:40:50.623 "strip_size_kb": 64, 00:40:50.623 "state": "online", 00:40:50.623 "raid_level": "raid5f", 00:40:50.623 "superblock": false, 00:40:50.623 "num_base_bdevs": 3, 00:40:50.623 "num_base_bdevs_discovered": 3, 00:40:50.623 "num_base_bdevs_operational": 3, 00:40:50.623 "base_bdevs_list": [ 00:40:50.623 { 00:40:50.623 "name": "spare", 00:40:50.623 "uuid": "1d2aa21a-2ee1-574a-bc8f-dd1d41b2c514", 00:40:50.623 "is_configured": true, 00:40:50.623 "data_offset": 0, 00:40:50.623 "data_size": 65536 00:40:50.623 }, 00:40:50.623 { 00:40:50.623 "name": "BaseBdev2", 00:40:50.623 "uuid": "a516aecb-fa88-518a-9993-202418c977b3", 00:40:50.623 "is_configured": true, 00:40:50.623 "data_offset": 0, 00:40:50.623 "data_size": 65536 00:40:50.623 }, 00:40:50.623 { 00:40:50.623 "name": "BaseBdev3", 00:40:50.623 "uuid": "ca304675-4725-5333-bd6e-5b29825dd904", 00:40:50.623 "is_configured": true, 00:40:50.623 "data_offset": 0, 00:40:50.623 "data_size": 65536 00:40:50.623 } 00:40:50.623 ] 00:40:50.623 }' 00:40:50.623 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:50.623 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:40:50.623 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:50.623 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:40:50.623 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:40:50.623 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:50.623 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:50.623 17:37:51 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:50.623 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:50.623 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:40:50.623 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:50.623 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:50.623 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:50.623 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:50.623 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:50.623 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:50.623 17:37:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:50.623 17:37:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:50.623 17:37:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:50.623 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:50.623 "name": "raid_bdev1", 00:40:50.624 "uuid": "0f926446-f39b-4762-a3b0-967080d2d87f", 00:40:50.624 "strip_size_kb": 64, 00:40:50.624 "state": "online", 00:40:50.624 "raid_level": "raid5f", 00:40:50.624 "superblock": false, 00:40:50.624 "num_base_bdevs": 3, 00:40:50.624 "num_base_bdevs_discovered": 3, 00:40:50.624 "num_base_bdevs_operational": 3, 00:40:50.624 "base_bdevs_list": [ 00:40:50.624 { 00:40:50.624 "name": "spare", 00:40:50.624 "uuid": "1d2aa21a-2ee1-574a-bc8f-dd1d41b2c514", 00:40:50.624 "is_configured": true, 00:40:50.624 "data_offset": 0, 00:40:50.624 "data_size": 65536 00:40:50.624 }, 00:40:50.624 { 00:40:50.624 
"name": "BaseBdev2", 00:40:50.624 "uuid": "a516aecb-fa88-518a-9993-202418c977b3", 00:40:50.624 "is_configured": true, 00:40:50.624 "data_offset": 0, 00:40:50.624 "data_size": 65536 00:40:50.624 }, 00:40:50.624 { 00:40:50.624 "name": "BaseBdev3", 00:40:50.624 "uuid": "ca304675-4725-5333-bd6e-5b29825dd904", 00:40:50.624 "is_configured": true, 00:40:50.624 "data_offset": 0, 00:40:50.624 "data_size": 65536 00:40:50.624 } 00:40:50.624 ] 00:40:50.624 }' 00:40:50.624 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:50.624 17:37:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:51.195 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:40:51.195 17:37:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:51.195 17:37:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:51.195 [2024-11-26 17:37:51.655034] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:40:51.195 [2024-11-26 17:37:51.655088] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:40:51.195 [2024-11-26 17:37:51.655203] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:40:51.195 [2024-11-26 17:37:51.655295] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:40:51.195 [2024-11-26 17:37:51.655314] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:40:51.195 17:37:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:51.195 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:51.195 17:37:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:51.195 17:37:51 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:51.195 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:40:51.195 17:37:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:51.195 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:40:51.195 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:40:51.195 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:40:51.195 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:40:51.195 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:40:51.195 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:40:51.195 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:40:51.195 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:40:51.195 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:40:51.195 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:40:51.195 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:40:51.195 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:40:51.195 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:40:51.456 /dev/nbd0 00:40:51.456 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:40:51.456 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:40:51.456 17:37:51 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:40:51.456 17:37:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:40:51.456 17:37:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:40:51.456 17:37:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:40:51.456 17:37:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:40:51.456 17:37:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:40:51.456 17:37:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:40:51.456 17:37:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:40:51.456 17:37:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:51.456 1+0 records in 00:40:51.456 1+0 records out 00:40:51.456 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000450659 s, 9.1 MB/s 00:40:51.456 17:37:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:51.456 17:37:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:40:51.456 17:37:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:51.456 17:37:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:40:51.456 17:37:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:40:51.456 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:40:51.456 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:40:51.456 17:37:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:40:51.716 /dev/nbd1 00:40:51.716 17:37:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:40:51.716 17:37:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:40:51.716 17:37:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:40:51.716 17:37:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:40:51.716 17:37:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:40:51.716 17:37:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:40:51.716 17:37:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:40:51.716 17:37:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:40:51.716 17:37:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:40:51.716 17:37:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:40:51.716 17:37:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:51.716 1+0 records in 00:40:51.716 1+0 records out 00:40:51.716 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000456022 s, 9.0 MB/s 00:40:51.716 17:37:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:51.716 17:37:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:40:51.716 17:37:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:51.716 17:37:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:40:51.716 17:37:52 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:40:51.717 17:37:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:40:51.717 17:37:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:40:51.717 17:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:40:51.977 17:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:40:51.977 17:37:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:40:51.977 17:37:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:40:51.977 17:37:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:40:51.977 17:37:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:40:51.977 17:37:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:51.977 17:37:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:40:51.977 17:37:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:40:51.977 17:37:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:40:51.977 17:37:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:40:51.977 17:37:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:51.977 17:37:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:51.977 17:37:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:40:51.977 17:37:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:40:51.977 17:37:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:40:51.977 17:37:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:51.977 17:37:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:40:52.235 17:37:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:40:52.235 17:37:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:40:52.235 17:37:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:40:52.235 17:37:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:52.235 17:37:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:52.236 17:37:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:40:52.236 17:37:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:40:52.236 17:37:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:40:52.236 17:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:40:52.236 17:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81874 00:40:52.236 17:37:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81874 ']' 00:40:52.236 17:37:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81874 00:40:52.236 17:37:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:40:52.236 17:37:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:52.236 17:37:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81874 00:40:52.495 17:37:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:52.495 17:37:52 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:52.495 killing process with pid 81874 00:40:52.495 17:37:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81874' 00:40:52.495 17:37:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81874 00:40:52.495 Received shutdown signal, test time was about 60.000000 seconds 00:40:52.495 00:40:52.495 Latency(us) 00:40:52.495 [2024-11-26T17:37:53.190Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:52.495 [2024-11-26T17:37:53.190Z] =================================================================================================================== 00:40:52.495 [2024-11-26T17:37:53.190Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:40:52.495 [2024-11-26 17:37:52.947341] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:40:52.495 17:37:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81874 00:40:52.755 [2024-11-26 17:37:53.394934] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:40:54.137 17:37:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:40:54.137 00:40:54.137 real 0m15.282s 00:40:54.137 user 0m18.582s 00:40:54.137 sys 0m2.053s 00:40:54.137 17:37:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:54.137 17:37:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:40:54.137 ************************************ 00:40:54.137 END TEST raid5f_rebuild_test 00:40:54.137 ************************************ 00:40:54.137 17:37:54 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:40:54.137 17:37:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:40:54.137 17:37:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:54.137 17:37:54 bdev_raid 
-- common/autotest_common.sh@10 -- # set +x 00:40:54.137 ************************************ 00:40:54.137 START TEST raid5f_rebuild_test_sb 00:40:54.137 ************************************ 00:40:54.137 17:37:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:40:54.137 17:37:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:40:54.137 17:37:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:40:54.137 17:37:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:40:54.137 17:37:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:40:54.137 17:37:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:40:54.137 17:37:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:40:54.137 17:37:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:40:54.137 17:37:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:40:54.137 17:37:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:40:54.137 17:37:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:40:54.137 17:37:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:40:54.137 17:37:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:40:54.137 17:37:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:40:54.137 17:37:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:40:54.137 17:37:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:40:54.137 17:37:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:40:54.137 17:37:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:40:54.137 17:37:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:40:54.137 17:37:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:40:54.137 17:37:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:40:54.137 17:37:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:40:54.137 17:37:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:40:54.137 17:37:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:40:54.137 17:37:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:40:54.137 17:37:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:40:54.137 17:37:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:40:54.137 17:37:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:40:54.137 17:37:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:40:54.137 17:37:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:40:54.137 17:37:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82314 00:40:54.137 17:37:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:40:54.137 17:37:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82314 00:40:54.137 17:37:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82314 ']' 00:40:54.137 17:37:54 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:54.137 17:37:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:54.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:54.137 17:37:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:54.137 17:37:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:54.137 17:37:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:54.137 I/O size of 3145728 is greater than zero copy threshold (65536). 00:40:54.137 Zero copy mechanism will not be used. 00:40:54.137 [2024-11-26 17:37:54.725550] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:40:54.137 [2024-11-26 17:37:54.725674] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82314 ] 00:40:54.399 [2024-11-26 17:37:54.902964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:54.399 [2024-11-26 17:37:55.046752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:54.661 [2024-11-26 17:37:55.277531] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:40:54.661 [2024-11-26 17:37:55.277632] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:40:54.921 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:54.921 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:40:54.921 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in 
"${base_bdevs[@]}" 00:40:54.921 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:40:54.921 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:54.921 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:54.921 BaseBdev1_malloc 00:40:54.921 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:54.921 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:40:54.921 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:54.921 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:54.921 [2024-11-26 17:37:55.605323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:40:54.921 [2024-11-26 17:37:55.605401] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:54.921 [2024-11-26 17:37:55.605424] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:40:54.921 [2024-11-26 17:37:55.605436] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:54.921 [2024-11-26 17:37:55.607778] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:54.921 [2024-11-26 17:37:55.607814] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:40:54.921 BaseBdev1 00:40:54.921 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:54.921 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:40:54.921 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:40:54.921 17:37:55 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:54.921 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:55.182 BaseBdev2_malloc 00:40:55.182 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:55.182 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:40:55.182 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:55.182 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:55.182 [2024-11-26 17:37:55.666118] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:40:55.182 [2024-11-26 17:37:55.666180] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:55.182 [2024-11-26 17:37:55.666203] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:40:55.182 [2024-11-26 17:37:55.666215] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:55.182 [2024-11-26 17:37:55.668584] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:55.182 [2024-11-26 17:37:55.668617] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:40:55.182 BaseBdev2 00:40:55.182 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:55.182 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:40:55.182 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:40:55.182 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:55.182 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:40:55.182 BaseBdev3_malloc 00:40:55.182 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:55.182 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:40:55.182 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:55.182 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:55.182 [2024-11-26 17:37:55.740729] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:40:55.182 [2024-11-26 17:37:55.740786] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:55.182 [2024-11-26 17:37:55.740808] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:40:55.182 [2024-11-26 17:37:55.740820] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:55.182 [2024-11-26 17:37:55.743148] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:55.182 [2024-11-26 17:37:55.743185] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:40:55.182 BaseBdev3 00:40:55.182 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:55.182 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:40:55.182 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:55.182 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:55.182 spare_malloc 00:40:55.182 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:55.182 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:40:55.182 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:55.182 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:55.182 spare_delay 00:40:55.182 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:55.182 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:40:55.182 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:55.182 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:55.182 [2024-11-26 17:37:55.815216] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:40:55.182 [2024-11-26 17:37:55.815275] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:55.182 [2024-11-26 17:37:55.815292] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:40:55.182 [2024-11-26 17:37:55.815303] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:55.182 [2024-11-26 17:37:55.817739] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:55.182 [2024-11-26 17:37:55.817776] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:40:55.182 spare 00:40:55.182 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:55.182 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:40:55.182 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:55.182 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:55.182 [2024-11-26 17:37:55.827278] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:40:55.182 [2024-11-26 17:37:55.829366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:40:55.182 [2024-11-26 17:37:55.829434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:40:55.182 [2024-11-26 17:37:55.829640] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:40:55.182 [2024-11-26 17:37:55.829654] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:40:55.182 [2024-11-26 17:37:55.829902] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:40:55.182 [2024-11-26 17:37:55.835630] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:40:55.182 [2024-11-26 17:37:55.835659] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:40:55.182 [2024-11-26 17:37:55.835856] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:55.182 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:55.182 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:40:55.182 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:55.182 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:55.182 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:55.182 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:55.182 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:40:55.182 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 
-- # local raid_bdev_info 00:40:55.182 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:55.182 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:55.182 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:55.182 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:55.182 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:55.182 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:55.182 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:55.182 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:55.442 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:55.442 "name": "raid_bdev1", 00:40:55.442 "uuid": "c925067f-5896-4dcc-9fd1-bc247a397390", 00:40:55.442 "strip_size_kb": 64, 00:40:55.442 "state": "online", 00:40:55.442 "raid_level": "raid5f", 00:40:55.442 "superblock": true, 00:40:55.442 "num_base_bdevs": 3, 00:40:55.442 "num_base_bdevs_discovered": 3, 00:40:55.442 "num_base_bdevs_operational": 3, 00:40:55.442 "base_bdevs_list": [ 00:40:55.442 { 00:40:55.442 "name": "BaseBdev1", 00:40:55.442 "uuid": "d1e281de-7065-52b5-9347-9c2fd6d5b291", 00:40:55.442 "is_configured": true, 00:40:55.442 "data_offset": 2048, 00:40:55.442 "data_size": 63488 00:40:55.442 }, 00:40:55.442 { 00:40:55.442 "name": "BaseBdev2", 00:40:55.442 "uuid": "eac02e18-a855-58d3-9002-0d96267f4e46", 00:40:55.442 "is_configured": true, 00:40:55.442 "data_offset": 2048, 00:40:55.442 "data_size": 63488 00:40:55.442 }, 00:40:55.442 { 00:40:55.442 "name": "BaseBdev3", 00:40:55.442 "uuid": "bbdfbcb6-1a16-5917-97db-cebf1ac9d6fb", 00:40:55.442 "is_configured": true, 
00:40:55.442 "data_offset": 2048, 00:40:55.442 "data_size": 63488 00:40:55.442 } 00:40:55.442 ] 00:40:55.442 }' 00:40:55.442 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:55.442 17:37:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:55.701 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:40:55.701 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:40:55.701 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:55.701 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:55.701 [2024-11-26 17:37:56.286781] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:40:55.701 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:55.701 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:40:55.701 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:55.701 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:40:55.701 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:55.701 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:55.701 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:55.701 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:40:55.701 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:40:55.701 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:40:55.701 17:37:56 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:40:55.701 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:40:55.701 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:40:55.701 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:40:55.701 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:40:55.701 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:40:55.701 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:40:55.701 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:40:55.701 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:40:55.701 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:40:55.701 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:40:55.959 [2024-11-26 17:37:56.554174] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:40:55.959 /dev/nbd0 00:40:55.959 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:40:55.959 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:40:55.959 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:40:55.959 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:40:55.959 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:40:55.959 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 
-- # (( i <= 20 )) 00:40:55.959 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:40:55.959 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:40:55.959 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:40:55.959 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:40:55.959 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:55.959 1+0 records in 00:40:55.959 1+0 records out 00:40:55.959 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00042483 s, 9.6 MB/s 00:40:55.959 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:55.959 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:40:55.959 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:55.959 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:40:55.959 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:40:55.959 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:40:55.959 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:40:55.959 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:40:55.959 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:40:55.959 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:40:55.959 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom 
of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:40:56.528 496+0 records in 00:40:56.528 496+0 records out 00:40:56.528 65011712 bytes (65 MB, 62 MiB) copied, 0.321723 s, 202 MB/s 00:40:56.529 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:40:56.529 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:40:56.529 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:40:56.529 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:40:56.529 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:40:56.529 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:56.529 17:37:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:40:56.789 17:37:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:40:56.789 [2024-11-26 17:37:57.225696] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:56.789 17:37:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:40:56.789 17:37:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:40:56.789 17:37:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:56.789 17:37:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:56.789 17:37:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:40:56.789 17:37:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:40:56.789 17:37:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:40:56.789 17:37:57 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:40:56.789 17:37:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:56.789 17:37:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:56.789 [2024-11-26 17:37:57.242070] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:40:56.789 17:37:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:56.789 17:37:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:40:56.789 17:37:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:56.789 17:37:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:56.789 17:37:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:56.789 17:37:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:56.789 17:37:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:40:56.789 17:37:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:56.789 17:37:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:56.789 17:37:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:56.789 17:37:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:56.789 17:37:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:56.789 17:37:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:56.789 17:37:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:56.789 17:37:57 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:56.789 17:37:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:56.789 17:37:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:56.789 "name": "raid_bdev1", 00:40:56.789 "uuid": "c925067f-5896-4dcc-9fd1-bc247a397390", 00:40:56.789 "strip_size_kb": 64, 00:40:56.789 "state": "online", 00:40:56.789 "raid_level": "raid5f", 00:40:56.789 "superblock": true, 00:40:56.789 "num_base_bdevs": 3, 00:40:56.789 "num_base_bdevs_discovered": 2, 00:40:56.789 "num_base_bdevs_operational": 2, 00:40:56.789 "base_bdevs_list": [ 00:40:56.789 { 00:40:56.789 "name": null, 00:40:56.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:56.789 "is_configured": false, 00:40:56.789 "data_offset": 0, 00:40:56.789 "data_size": 63488 00:40:56.789 }, 00:40:56.789 { 00:40:56.789 "name": "BaseBdev2", 00:40:56.789 "uuid": "eac02e18-a855-58d3-9002-0d96267f4e46", 00:40:56.789 "is_configured": true, 00:40:56.789 "data_offset": 2048, 00:40:56.789 "data_size": 63488 00:40:56.789 }, 00:40:56.789 { 00:40:56.789 "name": "BaseBdev3", 00:40:56.789 "uuid": "bbdfbcb6-1a16-5917-97db-cebf1ac9d6fb", 00:40:56.789 "is_configured": true, 00:40:56.789 "data_offset": 2048, 00:40:56.789 "data_size": 63488 00:40:56.789 } 00:40:56.789 ] 00:40:56.789 }' 00:40:56.789 17:37:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:56.789 17:37:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:57.049 17:37:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:40:57.049 17:37:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:57.049 17:37:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:57.049 [2024-11-26 17:37:57.677396] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:40:57.049 [2024-11-26 17:37:57.693458] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:40:57.049 17:37:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:57.049 17:37:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:40:57.049 [2024-11-26 17:37:57.701304] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:40:58.432 17:37:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:58.432 17:37:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:58.432 17:37:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:40:58.432 17:37:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:40:58.433 17:37:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:58.433 17:37:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:58.433 17:37:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:58.433 17:37:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:58.433 17:37:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:58.433 17:37:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:58.433 17:37:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:58.433 "name": "raid_bdev1", 00:40:58.433 "uuid": "c925067f-5896-4dcc-9fd1-bc247a397390", 00:40:58.433 "strip_size_kb": 64, 00:40:58.433 "state": "online", 00:40:58.433 "raid_level": "raid5f", 00:40:58.433 
"superblock": true, 00:40:58.433 "num_base_bdevs": 3, 00:40:58.433 "num_base_bdevs_discovered": 3, 00:40:58.433 "num_base_bdevs_operational": 3, 00:40:58.433 "process": { 00:40:58.433 "type": "rebuild", 00:40:58.433 "target": "spare", 00:40:58.433 "progress": { 00:40:58.433 "blocks": 20480, 00:40:58.433 "percent": 16 00:40:58.433 } 00:40:58.433 }, 00:40:58.433 "base_bdevs_list": [ 00:40:58.433 { 00:40:58.433 "name": "spare", 00:40:58.433 "uuid": "3ee66f60-4285-5bd5-be38-04d3cfbcdb2e", 00:40:58.433 "is_configured": true, 00:40:58.433 "data_offset": 2048, 00:40:58.433 "data_size": 63488 00:40:58.433 }, 00:40:58.433 { 00:40:58.433 "name": "BaseBdev2", 00:40:58.433 "uuid": "eac02e18-a855-58d3-9002-0d96267f4e46", 00:40:58.433 "is_configured": true, 00:40:58.433 "data_offset": 2048, 00:40:58.433 "data_size": 63488 00:40:58.433 }, 00:40:58.433 { 00:40:58.433 "name": "BaseBdev3", 00:40:58.433 "uuid": "bbdfbcb6-1a16-5917-97db-cebf1ac9d6fb", 00:40:58.433 "is_configured": true, 00:40:58.433 "data_offset": 2048, 00:40:58.433 "data_size": 63488 00:40:58.433 } 00:40:58.433 ] 00:40:58.433 }' 00:40:58.433 17:37:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:58.433 17:37:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:58.433 17:37:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:58.433 17:37:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:40:58.433 17:37:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:40:58.433 17:37:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:58.433 17:37:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:58.433 [2024-11-26 17:37:58.848748] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:40:58.433 [2024-11-26 17:37:58.911750] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:40:58.433 [2024-11-26 17:37:58.911850] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:58.433 [2024-11-26 17:37:58.911880] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:40:58.433 [2024-11-26 17:37:58.911888] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:40:58.433 17:37:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:58.433 17:37:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:40:58.433 17:37:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:40:58.433 17:37:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:40:58.433 17:37:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:40:58.433 17:37:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:40:58.433 17:37:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:40:58.433 17:37:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:40:58.433 17:37:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:40:58.433 17:37:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:40:58.433 17:37:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:40:58.433 17:37:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:58.433 17:37:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:40:58.433 17:37:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:58.433 17:37:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:58.433 17:37:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:58.433 17:37:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:40:58.433 "name": "raid_bdev1", 00:40:58.433 "uuid": "c925067f-5896-4dcc-9fd1-bc247a397390", 00:40:58.433 "strip_size_kb": 64, 00:40:58.433 "state": "online", 00:40:58.433 "raid_level": "raid5f", 00:40:58.433 "superblock": true, 00:40:58.433 "num_base_bdevs": 3, 00:40:58.433 "num_base_bdevs_discovered": 2, 00:40:58.433 "num_base_bdevs_operational": 2, 00:40:58.433 "base_bdevs_list": [ 00:40:58.433 { 00:40:58.433 "name": null, 00:40:58.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:58.433 "is_configured": false, 00:40:58.433 "data_offset": 0, 00:40:58.433 "data_size": 63488 00:40:58.433 }, 00:40:58.433 { 00:40:58.433 "name": "BaseBdev2", 00:40:58.433 "uuid": "eac02e18-a855-58d3-9002-0d96267f4e46", 00:40:58.433 "is_configured": true, 00:40:58.433 "data_offset": 2048, 00:40:58.433 "data_size": 63488 00:40:58.433 }, 00:40:58.433 { 00:40:58.433 "name": "BaseBdev3", 00:40:58.433 "uuid": "bbdfbcb6-1a16-5917-97db-cebf1ac9d6fb", 00:40:58.433 "is_configured": true, 00:40:58.433 "data_offset": 2048, 00:40:58.433 "data_size": 63488 00:40:58.433 } 00:40:58.433 ] 00:40:58.433 }' 00:40:58.433 17:37:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:40:58.433 17:37:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:59.002 17:37:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:40:59.002 17:37:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:59.002 17:37:59 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:40:59.002 17:37:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:40:59.002 17:37:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:40:59.002 17:37:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:59.002 17:37:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:59.002 17:37:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:59.002 17:37:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:59.002 17:37:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:59.002 17:37:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:59.002 "name": "raid_bdev1", 00:40:59.002 "uuid": "c925067f-5896-4dcc-9fd1-bc247a397390", 00:40:59.002 "strip_size_kb": 64, 00:40:59.002 "state": "online", 00:40:59.002 "raid_level": "raid5f", 00:40:59.002 "superblock": true, 00:40:59.002 "num_base_bdevs": 3, 00:40:59.002 "num_base_bdevs_discovered": 2, 00:40:59.002 "num_base_bdevs_operational": 2, 00:40:59.002 "base_bdevs_list": [ 00:40:59.002 { 00:40:59.002 "name": null, 00:40:59.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:59.002 "is_configured": false, 00:40:59.002 "data_offset": 0, 00:40:59.002 "data_size": 63488 00:40:59.002 }, 00:40:59.002 { 00:40:59.002 "name": "BaseBdev2", 00:40:59.002 "uuid": "eac02e18-a855-58d3-9002-0d96267f4e46", 00:40:59.002 "is_configured": true, 00:40:59.002 "data_offset": 2048, 00:40:59.002 "data_size": 63488 00:40:59.002 }, 00:40:59.002 { 00:40:59.002 "name": "BaseBdev3", 00:40:59.002 "uuid": "bbdfbcb6-1a16-5917-97db-cebf1ac9d6fb", 00:40:59.002 "is_configured": true, 00:40:59.002 "data_offset": 2048, 00:40:59.002 
"data_size": 63488 00:40:59.002 } 00:40:59.002 ] 00:40:59.002 }' 00:40:59.002 17:37:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:40:59.002 17:37:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:40:59.002 17:37:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:40:59.002 17:37:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:40:59.002 17:37:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:40:59.002 17:37:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:59.002 17:37:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:59.002 [2024-11-26 17:37:59.543213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:40:59.002 [2024-11-26 17:37:59.560977] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:40:59.002 17:37:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:59.002 17:37:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:40:59.002 [2024-11-26 17:37:59.569679] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:40:59.942 17:38:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:59.942 17:38:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:40:59.942 17:38:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:40:59.942 17:38:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:40:59.942 17:38:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:40:59.942 17:38:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:40:59.942 17:38:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:59.942 17:38:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:59.942 17:38:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:40:59.942 17:38:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:59.942 17:38:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:40:59.942 "name": "raid_bdev1", 00:40:59.942 "uuid": "c925067f-5896-4dcc-9fd1-bc247a397390", 00:40:59.942 "strip_size_kb": 64, 00:40:59.942 "state": "online", 00:40:59.942 "raid_level": "raid5f", 00:40:59.942 "superblock": true, 00:40:59.942 "num_base_bdevs": 3, 00:40:59.942 "num_base_bdevs_discovered": 3, 00:40:59.942 "num_base_bdevs_operational": 3, 00:40:59.942 "process": { 00:40:59.942 "type": "rebuild", 00:40:59.942 "target": "spare", 00:40:59.942 "progress": { 00:40:59.942 "blocks": 20480, 00:40:59.942 "percent": 16 00:40:59.942 } 00:40:59.942 }, 00:40:59.942 "base_bdevs_list": [ 00:40:59.942 { 00:40:59.942 "name": "spare", 00:40:59.942 "uuid": "3ee66f60-4285-5bd5-be38-04d3cfbcdb2e", 00:40:59.942 "is_configured": true, 00:40:59.942 "data_offset": 2048, 00:40:59.942 "data_size": 63488 00:40:59.942 }, 00:40:59.943 { 00:40:59.943 "name": "BaseBdev2", 00:40:59.943 "uuid": "eac02e18-a855-58d3-9002-0d96267f4e46", 00:40:59.943 "is_configured": true, 00:40:59.943 "data_offset": 2048, 00:40:59.943 "data_size": 63488 00:40:59.943 }, 00:40:59.943 { 00:40:59.943 "name": "BaseBdev3", 00:40:59.943 "uuid": "bbdfbcb6-1a16-5917-97db-cebf1ac9d6fb", 00:40:59.943 "is_configured": true, 00:40:59.943 "data_offset": 2048, 00:40:59.943 "data_size": 63488 00:40:59.943 } 00:40:59.943 ] 00:40:59.943 }' 
00:40:59.943 17:38:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:00.213 17:38:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:00.213 17:38:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:00.213 17:38:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:00.213 17:38:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:41:00.213 17:38:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:41:00.213 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:41:00.213 17:38:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:41:00.213 17:38:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:41:00.213 17:38:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=575 00:41:00.213 17:38:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:41:00.213 17:38:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:00.213 17:38:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:00.213 17:38:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:00.213 17:38:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:00.213 17:38:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:00.213 17:38:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:00.213 17:38:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 
-- # rpc_cmd bdev_raid_get_bdevs all 00:41:00.213 17:38:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:00.213 17:38:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:00.213 17:38:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:00.213 17:38:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:00.213 "name": "raid_bdev1", 00:41:00.213 "uuid": "c925067f-5896-4dcc-9fd1-bc247a397390", 00:41:00.213 "strip_size_kb": 64, 00:41:00.213 "state": "online", 00:41:00.213 "raid_level": "raid5f", 00:41:00.213 "superblock": true, 00:41:00.213 "num_base_bdevs": 3, 00:41:00.213 "num_base_bdevs_discovered": 3, 00:41:00.213 "num_base_bdevs_operational": 3, 00:41:00.213 "process": { 00:41:00.213 "type": "rebuild", 00:41:00.213 "target": "spare", 00:41:00.213 "progress": { 00:41:00.213 "blocks": 22528, 00:41:00.213 "percent": 17 00:41:00.213 } 00:41:00.213 }, 00:41:00.213 "base_bdevs_list": [ 00:41:00.213 { 00:41:00.213 "name": "spare", 00:41:00.213 "uuid": "3ee66f60-4285-5bd5-be38-04d3cfbcdb2e", 00:41:00.213 "is_configured": true, 00:41:00.213 "data_offset": 2048, 00:41:00.213 "data_size": 63488 00:41:00.213 }, 00:41:00.213 { 00:41:00.213 "name": "BaseBdev2", 00:41:00.213 "uuid": "eac02e18-a855-58d3-9002-0d96267f4e46", 00:41:00.213 "is_configured": true, 00:41:00.213 "data_offset": 2048, 00:41:00.213 "data_size": 63488 00:41:00.213 }, 00:41:00.213 { 00:41:00.213 "name": "BaseBdev3", 00:41:00.213 "uuid": "bbdfbcb6-1a16-5917-97db-cebf1ac9d6fb", 00:41:00.213 "is_configured": true, 00:41:00.213 "data_offset": 2048, 00:41:00.213 "data_size": 63488 00:41:00.213 } 00:41:00.213 ] 00:41:00.213 }' 00:41:00.213 17:38:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:00.213 17:38:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:41:00.213 17:38:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:00.213 17:38:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:00.213 17:38:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:41:01.170 17:38:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:41:01.170 17:38:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:01.170 17:38:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:01.170 17:38:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:01.170 17:38:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:01.170 17:38:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:01.170 17:38:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:01.170 17:38:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:01.170 17:38:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:01.170 17:38:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:01.430 17:38:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:01.430 17:38:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:01.430 "name": "raid_bdev1", 00:41:01.430 "uuid": "c925067f-5896-4dcc-9fd1-bc247a397390", 00:41:01.430 "strip_size_kb": 64, 00:41:01.430 "state": "online", 00:41:01.430 "raid_level": "raid5f", 00:41:01.430 "superblock": true, 00:41:01.430 "num_base_bdevs": 3, 00:41:01.430 "num_base_bdevs_discovered": 3, 00:41:01.430 
"num_base_bdevs_operational": 3, 00:41:01.430 "process": { 00:41:01.430 "type": "rebuild", 00:41:01.430 "target": "spare", 00:41:01.430 "progress": { 00:41:01.430 "blocks": 45056, 00:41:01.430 "percent": 35 00:41:01.430 } 00:41:01.430 }, 00:41:01.430 "base_bdevs_list": [ 00:41:01.430 { 00:41:01.430 "name": "spare", 00:41:01.430 "uuid": "3ee66f60-4285-5bd5-be38-04d3cfbcdb2e", 00:41:01.430 "is_configured": true, 00:41:01.430 "data_offset": 2048, 00:41:01.430 "data_size": 63488 00:41:01.430 }, 00:41:01.430 { 00:41:01.430 "name": "BaseBdev2", 00:41:01.430 "uuid": "eac02e18-a855-58d3-9002-0d96267f4e46", 00:41:01.430 "is_configured": true, 00:41:01.430 "data_offset": 2048, 00:41:01.430 "data_size": 63488 00:41:01.430 }, 00:41:01.430 { 00:41:01.430 "name": "BaseBdev3", 00:41:01.430 "uuid": "bbdfbcb6-1a16-5917-97db-cebf1ac9d6fb", 00:41:01.430 "is_configured": true, 00:41:01.430 "data_offset": 2048, 00:41:01.431 "data_size": 63488 00:41:01.431 } 00:41:01.431 ] 00:41:01.431 }' 00:41:01.431 17:38:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:01.431 17:38:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:01.431 17:38:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:01.431 17:38:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:01.431 17:38:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:41:02.371 17:38:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:41:02.371 17:38:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:02.371 17:38:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:02.371 17:38:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:41:02.371 17:38:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:02.371 17:38:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:02.371 17:38:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:02.371 17:38:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:02.371 17:38:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.371 17:38:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:02.371 17:38:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.371 17:38:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:02.371 "name": "raid_bdev1", 00:41:02.371 "uuid": "c925067f-5896-4dcc-9fd1-bc247a397390", 00:41:02.371 "strip_size_kb": 64, 00:41:02.371 "state": "online", 00:41:02.371 "raid_level": "raid5f", 00:41:02.371 "superblock": true, 00:41:02.371 "num_base_bdevs": 3, 00:41:02.371 "num_base_bdevs_discovered": 3, 00:41:02.371 "num_base_bdevs_operational": 3, 00:41:02.371 "process": { 00:41:02.371 "type": "rebuild", 00:41:02.371 "target": "spare", 00:41:02.371 "progress": { 00:41:02.371 "blocks": 69632, 00:41:02.371 "percent": 54 00:41:02.371 } 00:41:02.371 }, 00:41:02.371 "base_bdevs_list": [ 00:41:02.371 { 00:41:02.371 "name": "spare", 00:41:02.371 "uuid": "3ee66f60-4285-5bd5-be38-04d3cfbcdb2e", 00:41:02.371 "is_configured": true, 00:41:02.371 "data_offset": 2048, 00:41:02.371 "data_size": 63488 00:41:02.371 }, 00:41:02.371 { 00:41:02.371 "name": "BaseBdev2", 00:41:02.371 "uuid": "eac02e18-a855-58d3-9002-0d96267f4e46", 00:41:02.371 "is_configured": true, 00:41:02.372 "data_offset": 2048, 00:41:02.372 "data_size": 63488 00:41:02.372 }, 00:41:02.372 { 00:41:02.372 "name": "BaseBdev3", 
00:41:02.372 "uuid": "bbdfbcb6-1a16-5917-97db-cebf1ac9d6fb", 00:41:02.372 "is_configured": true, 00:41:02.372 "data_offset": 2048, 00:41:02.372 "data_size": 63488 00:41:02.372 } 00:41:02.372 ] 00:41:02.372 }' 00:41:02.372 17:38:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:02.632 17:38:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:02.632 17:38:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:02.632 17:38:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:02.632 17:38:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:41:03.572 17:38:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:41:03.572 17:38:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:03.572 17:38:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:03.572 17:38:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:03.572 17:38:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:03.572 17:38:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:03.572 17:38:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:03.572 17:38:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:03.572 17:38:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:03.572 17:38:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:03.572 17:38:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:41:03.572 17:38:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:03.572 "name": "raid_bdev1", 00:41:03.572 "uuid": "c925067f-5896-4dcc-9fd1-bc247a397390", 00:41:03.572 "strip_size_kb": 64, 00:41:03.572 "state": "online", 00:41:03.572 "raid_level": "raid5f", 00:41:03.572 "superblock": true, 00:41:03.572 "num_base_bdevs": 3, 00:41:03.572 "num_base_bdevs_discovered": 3, 00:41:03.572 "num_base_bdevs_operational": 3, 00:41:03.572 "process": { 00:41:03.572 "type": "rebuild", 00:41:03.572 "target": "spare", 00:41:03.572 "progress": { 00:41:03.572 "blocks": 92160, 00:41:03.572 "percent": 72 00:41:03.572 } 00:41:03.572 }, 00:41:03.572 "base_bdevs_list": [ 00:41:03.572 { 00:41:03.572 "name": "spare", 00:41:03.572 "uuid": "3ee66f60-4285-5bd5-be38-04d3cfbcdb2e", 00:41:03.572 "is_configured": true, 00:41:03.572 "data_offset": 2048, 00:41:03.572 "data_size": 63488 00:41:03.572 }, 00:41:03.572 { 00:41:03.572 "name": "BaseBdev2", 00:41:03.572 "uuid": "eac02e18-a855-58d3-9002-0d96267f4e46", 00:41:03.572 "is_configured": true, 00:41:03.572 "data_offset": 2048, 00:41:03.572 "data_size": 63488 00:41:03.572 }, 00:41:03.572 { 00:41:03.572 "name": "BaseBdev3", 00:41:03.572 "uuid": "bbdfbcb6-1a16-5917-97db-cebf1ac9d6fb", 00:41:03.572 "is_configured": true, 00:41:03.572 "data_offset": 2048, 00:41:03.572 "data_size": 63488 00:41:03.572 } 00:41:03.572 ] 00:41:03.572 }' 00:41:03.572 17:38:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:03.572 17:38:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:03.572 17:38:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:03.832 17:38:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:03.832 17:38:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:41:04.773 17:38:05 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:41:04.773 17:38:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:04.773 17:38:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:04.773 17:38:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:04.773 17:38:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:04.773 17:38:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:04.773 17:38:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:04.773 17:38:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:04.773 17:38:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:04.773 17:38:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:04.773 17:38:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:04.773 17:38:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:04.773 "name": "raid_bdev1", 00:41:04.773 "uuid": "c925067f-5896-4dcc-9fd1-bc247a397390", 00:41:04.773 "strip_size_kb": 64, 00:41:04.773 "state": "online", 00:41:04.773 "raid_level": "raid5f", 00:41:04.773 "superblock": true, 00:41:04.773 "num_base_bdevs": 3, 00:41:04.773 "num_base_bdevs_discovered": 3, 00:41:04.773 "num_base_bdevs_operational": 3, 00:41:04.773 "process": { 00:41:04.773 "type": "rebuild", 00:41:04.773 "target": "spare", 00:41:04.773 "progress": { 00:41:04.773 "blocks": 114688, 00:41:04.773 "percent": 90 00:41:04.773 } 00:41:04.773 }, 00:41:04.773 "base_bdevs_list": [ 00:41:04.773 { 00:41:04.773 "name": "spare", 00:41:04.773 "uuid": 
"3ee66f60-4285-5bd5-be38-04d3cfbcdb2e", 00:41:04.773 "is_configured": true, 00:41:04.773 "data_offset": 2048, 00:41:04.773 "data_size": 63488 00:41:04.773 }, 00:41:04.773 { 00:41:04.773 "name": "BaseBdev2", 00:41:04.773 "uuid": "eac02e18-a855-58d3-9002-0d96267f4e46", 00:41:04.773 "is_configured": true, 00:41:04.773 "data_offset": 2048, 00:41:04.773 "data_size": 63488 00:41:04.773 }, 00:41:04.773 { 00:41:04.773 "name": "BaseBdev3", 00:41:04.774 "uuid": "bbdfbcb6-1a16-5917-97db-cebf1ac9d6fb", 00:41:04.774 "is_configured": true, 00:41:04.774 "data_offset": 2048, 00:41:04.774 "data_size": 63488 00:41:04.774 } 00:41:04.774 ] 00:41:04.774 }' 00:41:04.774 17:38:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:04.774 17:38:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:04.774 17:38:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:04.774 17:38:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:04.774 17:38:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:41:05.343 [2024-11-26 17:38:05.824453] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:41:05.343 [2024-11-26 17:38:05.824608] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:41:05.343 [2024-11-26 17:38:05.824752] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:05.913 17:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:41:05.913 17:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:05.913 17:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:05.913 17:38:06 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:05.913 17:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:05.913 17:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:05.913 17:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:05.913 17:38:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:05.913 17:38:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:05.913 17:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:05.913 17:38:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:05.913 17:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:05.913 "name": "raid_bdev1", 00:41:05.913 "uuid": "c925067f-5896-4dcc-9fd1-bc247a397390", 00:41:05.913 "strip_size_kb": 64, 00:41:05.913 "state": "online", 00:41:05.913 "raid_level": "raid5f", 00:41:05.913 "superblock": true, 00:41:05.913 "num_base_bdevs": 3, 00:41:05.913 "num_base_bdevs_discovered": 3, 00:41:05.913 "num_base_bdevs_operational": 3, 00:41:05.913 "base_bdevs_list": [ 00:41:05.913 { 00:41:05.913 "name": "spare", 00:41:05.913 "uuid": "3ee66f60-4285-5bd5-be38-04d3cfbcdb2e", 00:41:05.913 "is_configured": true, 00:41:05.913 "data_offset": 2048, 00:41:05.913 "data_size": 63488 00:41:05.913 }, 00:41:05.913 { 00:41:05.913 "name": "BaseBdev2", 00:41:05.913 "uuid": "eac02e18-a855-58d3-9002-0d96267f4e46", 00:41:05.913 "is_configured": true, 00:41:05.913 "data_offset": 2048, 00:41:05.913 "data_size": 63488 00:41:05.913 }, 00:41:05.913 { 00:41:05.913 "name": "BaseBdev3", 00:41:05.913 "uuid": "bbdfbcb6-1a16-5917-97db-cebf1ac9d6fb", 00:41:05.913 "is_configured": true, 00:41:05.913 "data_offset": 2048, 00:41:05.913 "data_size": 63488 00:41:05.913 } 
00:41:05.913 ] 00:41:05.913 }' 00:41:05.913 17:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:05.913 17:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:41:05.913 17:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:05.913 17:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:41:05.913 17:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:41:05.913 17:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:41:05.913 17:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:05.913 17:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:41:05.913 17:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:41:05.913 17:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:05.913 17:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:05.913 17:38:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:05.913 17:38:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:05.913 17:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:05.913 17:38:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:06.173 17:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:06.173 "name": "raid_bdev1", 00:41:06.173 "uuid": "c925067f-5896-4dcc-9fd1-bc247a397390", 00:41:06.173 "strip_size_kb": 64, 00:41:06.173 "state": "online", 00:41:06.173 "raid_level": 
"raid5f", 00:41:06.173 "superblock": true, 00:41:06.173 "num_base_bdevs": 3, 00:41:06.173 "num_base_bdevs_discovered": 3, 00:41:06.173 "num_base_bdevs_operational": 3, 00:41:06.173 "base_bdevs_list": [ 00:41:06.173 { 00:41:06.173 "name": "spare", 00:41:06.173 "uuid": "3ee66f60-4285-5bd5-be38-04d3cfbcdb2e", 00:41:06.173 "is_configured": true, 00:41:06.173 "data_offset": 2048, 00:41:06.173 "data_size": 63488 00:41:06.173 }, 00:41:06.173 { 00:41:06.173 "name": "BaseBdev2", 00:41:06.173 "uuid": "eac02e18-a855-58d3-9002-0d96267f4e46", 00:41:06.173 "is_configured": true, 00:41:06.173 "data_offset": 2048, 00:41:06.173 "data_size": 63488 00:41:06.173 }, 00:41:06.173 { 00:41:06.173 "name": "BaseBdev3", 00:41:06.173 "uuid": "bbdfbcb6-1a16-5917-97db-cebf1ac9d6fb", 00:41:06.173 "is_configured": true, 00:41:06.173 "data_offset": 2048, 00:41:06.173 "data_size": 63488 00:41:06.173 } 00:41:06.173 ] 00:41:06.173 }' 00:41:06.173 17:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:06.173 17:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:41:06.173 17:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:06.173 17:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:41:06.173 17:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:41:06.173 17:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:06.173 17:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:06.173 17:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:06.173 17:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:06.173 17:38:06 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:41:06.173 17:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:06.173 17:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:06.173 17:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:06.173 17:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:06.173 17:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:06.173 17:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:06.173 17:38:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:06.173 17:38:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:06.173 17:38:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:06.173 17:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:06.173 "name": "raid_bdev1", 00:41:06.173 "uuid": "c925067f-5896-4dcc-9fd1-bc247a397390", 00:41:06.173 "strip_size_kb": 64, 00:41:06.173 "state": "online", 00:41:06.173 "raid_level": "raid5f", 00:41:06.173 "superblock": true, 00:41:06.173 "num_base_bdevs": 3, 00:41:06.173 "num_base_bdevs_discovered": 3, 00:41:06.173 "num_base_bdevs_operational": 3, 00:41:06.173 "base_bdevs_list": [ 00:41:06.173 { 00:41:06.173 "name": "spare", 00:41:06.173 "uuid": "3ee66f60-4285-5bd5-be38-04d3cfbcdb2e", 00:41:06.173 "is_configured": true, 00:41:06.173 "data_offset": 2048, 00:41:06.173 "data_size": 63488 00:41:06.173 }, 00:41:06.173 { 00:41:06.173 "name": "BaseBdev2", 00:41:06.173 "uuid": "eac02e18-a855-58d3-9002-0d96267f4e46", 00:41:06.173 "is_configured": true, 00:41:06.173 "data_offset": 2048, 00:41:06.173 
"data_size": 63488 00:41:06.173 }, 00:41:06.173 { 00:41:06.173 "name": "BaseBdev3", 00:41:06.173 "uuid": "bbdfbcb6-1a16-5917-97db-cebf1ac9d6fb", 00:41:06.173 "is_configured": true, 00:41:06.173 "data_offset": 2048, 00:41:06.173 "data_size": 63488 00:41:06.173 } 00:41:06.173 ] 00:41:06.173 }' 00:41:06.173 17:38:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:06.173 17:38:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:06.433 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:41:06.433 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:06.433 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:06.692 [2024-11-26 17:38:07.126936] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:41:06.692 [2024-11-26 17:38:07.126969] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:41:06.692 [2024-11-26 17:38:07.127063] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:41:06.692 [2024-11-26 17:38:07.127149] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:41:06.692 [2024-11-26 17:38:07.127170] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:41:06.692 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:06.692 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:06.692 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:41:06.692 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:06.692 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:41:06.692 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:06.692 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:41:06.692 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:41:06.692 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:41:06.692 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:41:06.692 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:41:06.692 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:41:06.692 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:41:06.692 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:41:06.692 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:41:06.693 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:41:06.693 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:41:06.693 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:41:06.693 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:41:06.693 /dev/nbd0 00:41:06.951 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:41:06.951 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:41:06.951 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 
00:41:06.951 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:41:06.951 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:41:06.951 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:41:06.951 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:41:06.951 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:41:06.951 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:41:06.951 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:41:06.951 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:06.951 1+0 records in 00:41:06.951 1+0 records out 00:41:06.951 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000368017 s, 11.1 MB/s 00:41:06.951 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:06.951 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:41:06.951 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:06.951 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:41:06.951 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:41:06.951 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:41:06.951 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:41:06.951 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:41:06.951 /dev/nbd1 00:41:07.215 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:41:07.215 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:41:07.215 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:41:07.215 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:41:07.215 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:41:07.215 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:41:07.215 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:41:07.215 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:41:07.215 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:41:07.215 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:41:07.215 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:07.215 1+0 records in 00:41:07.215 1+0 records out 00:41:07.215 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000473942 s, 8.6 MB/s 00:41:07.215 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:07.215 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:41:07.215 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:07.215 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # 
'[' 4096 '!=' 0 ']' 00:41:07.215 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:41:07.215 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:41:07.215 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:41:07.215 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:41:07.215 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:41:07.215 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:41:07.215 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:41:07.215 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:41:07.215 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:41:07.215 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:07.215 17:38:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:41:07.475 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:41:07.475 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:41:07.475 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:41:07.475 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:07.475 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:07.475 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:41:07.475 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@41 -- # break 00:41:07.475 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:41:07.475 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:07.475 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:41:07.735 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:41:07.735 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:41:07.735 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:41:07.735 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:07.735 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:07.735 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:41:07.735 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:41:07.735 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:41:07.735 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:41:07.735 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:41:07.735 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:07.735 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:07.735 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:07.735 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:41:07.735 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:41:07.735 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:07.735 [2024-11-26 17:38:08.359777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:41:07.735 [2024-11-26 17:38:08.359855] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:07.735 [2024-11-26 17:38:08.359881] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:41:07.735 [2024-11-26 17:38:08.359893] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:07.735 [2024-11-26 17:38:08.362635] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:07.735 [2024-11-26 17:38:08.362673] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:41:07.735 [2024-11-26 17:38:08.362763] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:41:07.735 [2024-11-26 17:38:08.362816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:41:07.735 [2024-11-26 17:38:08.362984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:41:07.735 [2024-11-26 17:38:08.363115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:41:07.735 spare 00:41:07.735 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:07.735 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:41:07.735 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:07.735 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:07.996 [2024-11-26 17:38:08.463027] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:41:07.996 [2024-11-26 17:38:08.463060] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:41:07.996 [2024-11-26 17:38:08.463364] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:41:07.996 [2024-11-26 17:38:08.468922] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:41:07.996 [2024-11-26 17:38:08.468946] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:41:07.996 [2024-11-26 17:38:08.469152] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:07.996 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:07.996 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:41:07.996 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:07.996 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:07.996 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:07.996 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:07.996 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:41:07.996 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:07.996 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:07.996 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:07.996 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:07.996 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:07.996 17:38:08 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:07.996 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:07.996 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:07.996 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:07.996 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:07.996 "name": "raid_bdev1", 00:41:07.996 "uuid": "c925067f-5896-4dcc-9fd1-bc247a397390", 00:41:07.996 "strip_size_kb": 64, 00:41:07.996 "state": "online", 00:41:07.996 "raid_level": "raid5f", 00:41:07.996 "superblock": true, 00:41:07.996 "num_base_bdevs": 3, 00:41:07.996 "num_base_bdevs_discovered": 3, 00:41:07.996 "num_base_bdevs_operational": 3, 00:41:07.996 "base_bdevs_list": [ 00:41:07.996 { 00:41:07.996 "name": "spare", 00:41:07.996 "uuid": "3ee66f60-4285-5bd5-be38-04d3cfbcdb2e", 00:41:07.996 "is_configured": true, 00:41:07.996 "data_offset": 2048, 00:41:07.996 "data_size": 63488 00:41:07.996 }, 00:41:07.996 { 00:41:07.996 "name": "BaseBdev2", 00:41:07.996 "uuid": "eac02e18-a855-58d3-9002-0d96267f4e46", 00:41:07.996 "is_configured": true, 00:41:07.996 "data_offset": 2048, 00:41:07.996 "data_size": 63488 00:41:07.996 }, 00:41:07.996 { 00:41:07.996 "name": "BaseBdev3", 00:41:07.997 "uuid": "bbdfbcb6-1a16-5917-97db-cebf1ac9d6fb", 00:41:07.997 "is_configured": true, 00:41:07.997 "data_offset": 2048, 00:41:07.997 "data_size": 63488 00:41:07.997 } 00:41:07.997 ] 00:41:07.997 }' 00:41:07.997 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:07.997 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:08.567 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:41:08.567 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:08.567 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:41:08.567 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:41:08.567 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:08.567 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:08.567 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:08.567 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:08.567 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:08.567 17:38:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:08.567 17:38:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:08.567 "name": "raid_bdev1", 00:41:08.567 "uuid": "c925067f-5896-4dcc-9fd1-bc247a397390", 00:41:08.567 "strip_size_kb": 64, 00:41:08.567 "state": "online", 00:41:08.567 "raid_level": "raid5f", 00:41:08.567 "superblock": true, 00:41:08.567 "num_base_bdevs": 3, 00:41:08.567 "num_base_bdevs_discovered": 3, 00:41:08.567 "num_base_bdevs_operational": 3, 00:41:08.567 "base_bdevs_list": [ 00:41:08.567 { 00:41:08.567 "name": "spare", 00:41:08.567 "uuid": "3ee66f60-4285-5bd5-be38-04d3cfbcdb2e", 00:41:08.567 "is_configured": true, 00:41:08.567 "data_offset": 2048, 00:41:08.567 "data_size": 63488 00:41:08.567 }, 00:41:08.567 { 00:41:08.567 "name": "BaseBdev2", 00:41:08.567 "uuid": "eac02e18-a855-58d3-9002-0d96267f4e46", 00:41:08.567 "is_configured": true, 00:41:08.567 "data_offset": 2048, 00:41:08.567 "data_size": 63488 00:41:08.567 }, 00:41:08.567 { 00:41:08.567 "name": "BaseBdev3", 00:41:08.567 "uuid": "bbdfbcb6-1a16-5917-97db-cebf1ac9d6fb", 
00:41:08.567 "is_configured": true, 00:41:08.567 "data_offset": 2048, 00:41:08.567 "data_size": 63488 00:41:08.567 } 00:41:08.567 ] 00:41:08.567 }' 00:41:08.567 17:38:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:08.567 17:38:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:41:08.567 17:38:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:08.567 17:38:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:41:08.567 17:38:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:41:08.567 17:38:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:08.567 17:38:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:08.567 17:38:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:08.567 17:38:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:08.567 17:38:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:41:08.567 17:38:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:41:08.567 17:38:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:08.567 17:38:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:08.567 [2024-11-26 17:38:09.135164] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:41:08.567 17:38:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:08.567 17:38:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:41:08.567 17:38:09 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:08.567 17:38:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:08.567 17:38:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:08.567 17:38:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:08.567 17:38:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:41:08.567 17:38:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:08.567 17:38:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:08.567 17:38:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:08.567 17:38:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:08.567 17:38:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:08.568 17:38:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:08.568 17:38:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:08.568 17:38:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:08.568 17:38:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:08.568 17:38:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:08.568 "name": "raid_bdev1", 00:41:08.568 "uuid": "c925067f-5896-4dcc-9fd1-bc247a397390", 00:41:08.568 "strip_size_kb": 64, 00:41:08.568 "state": "online", 00:41:08.568 "raid_level": "raid5f", 00:41:08.568 "superblock": true, 00:41:08.568 "num_base_bdevs": 3, 00:41:08.568 "num_base_bdevs_discovered": 2, 00:41:08.568 "num_base_bdevs_operational": 2, 00:41:08.568 "base_bdevs_list": [ 00:41:08.568 { 
00:41:08.568 "name": null, 00:41:08.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:08.568 "is_configured": false, 00:41:08.568 "data_offset": 0, 00:41:08.568 "data_size": 63488 00:41:08.568 }, 00:41:08.568 { 00:41:08.568 "name": "BaseBdev2", 00:41:08.568 "uuid": "eac02e18-a855-58d3-9002-0d96267f4e46", 00:41:08.568 "is_configured": true, 00:41:08.568 "data_offset": 2048, 00:41:08.568 "data_size": 63488 00:41:08.568 }, 00:41:08.568 { 00:41:08.568 "name": "BaseBdev3", 00:41:08.568 "uuid": "bbdfbcb6-1a16-5917-97db-cebf1ac9d6fb", 00:41:08.568 "is_configured": true, 00:41:08.568 "data_offset": 2048, 00:41:08.568 "data_size": 63488 00:41:08.568 } 00:41:08.568 ] 00:41:08.568 }' 00:41:08.568 17:38:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:08.568 17:38:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:09.137 17:38:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:41:09.137 17:38:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:09.137 17:38:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:09.137 [2024-11-26 17:38:09.630363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:41:09.137 [2024-11-26 17:38:09.630615] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:41:09.137 [2024-11-26 17:38:09.630635] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:41:09.137 [2024-11-26 17:38:09.630692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:41:09.137 [2024-11-26 17:38:09.646774] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:41:09.137 17:38:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:09.137 17:38:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:41:09.137 [2024-11-26 17:38:09.654319] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:41:10.076 17:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:10.076 17:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:10.076 17:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:10.076 17:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:10.076 17:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:10.076 17:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:10.076 17:38:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:10.076 17:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:10.076 17:38:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:10.076 17:38:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:10.076 17:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:10.076 "name": "raid_bdev1", 00:41:10.076 "uuid": "c925067f-5896-4dcc-9fd1-bc247a397390", 00:41:10.076 "strip_size_kb": 64, 00:41:10.076 "state": "online", 00:41:10.076 
"raid_level": "raid5f", 00:41:10.076 "superblock": true, 00:41:10.076 "num_base_bdevs": 3, 00:41:10.076 "num_base_bdevs_discovered": 3, 00:41:10.076 "num_base_bdevs_operational": 3, 00:41:10.076 "process": { 00:41:10.076 "type": "rebuild", 00:41:10.076 "target": "spare", 00:41:10.076 "progress": { 00:41:10.076 "blocks": 20480, 00:41:10.076 "percent": 16 00:41:10.076 } 00:41:10.076 }, 00:41:10.076 "base_bdevs_list": [ 00:41:10.076 { 00:41:10.076 "name": "spare", 00:41:10.076 "uuid": "3ee66f60-4285-5bd5-be38-04d3cfbcdb2e", 00:41:10.076 "is_configured": true, 00:41:10.076 "data_offset": 2048, 00:41:10.076 "data_size": 63488 00:41:10.076 }, 00:41:10.076 { 00:41:10.076 "name": "BaseBdev2", 00:41:10.076 "uuid": "eac02e18-a855-58d3-9002-0d96267f4e46", 00:41:10.076 "is_configured": true, 00:41:10.076 "data_offset": 2048, 00:41:10.076 "data_size": 63488 00:41:10.076 }, 00:41:10.076 { 00:41:10.076 "name": "BaseBdev3", 00:41:10.076 "uuid": "bbdfbcb6-1a16-5917-97db-cebf1ac9d6fb", 00:41:10.076 "is_configured": true, 00:41:10.076 "data_offset": 2048, 00:41:10.076 "data_size": 63488 00:41:10.076 } 00:41:10.076 ] 00:41:10.076 }' 00:41:10.076 17:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:10.076 17:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:10.076 17:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:10.335 17:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:10.335 17:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:41:10.335 17:38:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:10.335 17:38:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:10.335 [2024-11-26 17:38:10.789756] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:41:10.335 [2024-11-26 17:38:10.865186] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:41:10.335 [2024-11-26 17:38:10.865256] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:10.335 [2024-11-26 17:38:10.865273] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:41:10.335 [2024-11-26 17:38:10.865284] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:41:10.335 17:38:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:10.335 17:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:41:10.335 17:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:10.335 17:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:10.335 17:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:10.335 17:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:10.335 17:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:41:10.335 17:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:10.335 17:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:10.335 17:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:10.335 17:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:10.335 17:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:10.335 17:38:10 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:10.335 17:38:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:10.335 17:38:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:10.335 17:38:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:10.335 17:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:10.335 "name": "raid_bdev1", 00:41:10.335 "uuid": "c925067f-5896-4dcc-9fd1-bc247a397390", 00:41:10.335 "strip_size_kb": 64, 00:41:10.335 "state": "online", 00:41:10.335 "raid_level": "raid5f", 00:41:10.335 "superblock": true, 00:41:10.335 "num_base_bdevs": 3, 00:41:10.335 "num_base_bdevs_discovered": 2, 00:41:10.335 "num_base_bdevs_operational": 2, 00:41:10.335 "base_bdevs_list": [ 00:41:10.335 { 00:41:10.335 "name": null, 00:41:10.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:10.336 "is_configured": false, 00:41:10.336 "data_offset": 0, 00:41:10.336 "data_size": 63488 00:41:10.336 }, 00:41:10.336 { 00:41:10.336 "name": "BaseBdev2", 00:41:10.336 "uuid": "eac02e18-a855-58d3-9002-0d96267f4e46", 00:41:10.336 "is_configured": true, 00:41:10.336 "data_offset": 2048, 00:41:10.336 "data_size": 63488 00:41:10.336 }, 00:41:10.336 { 00:41:10.336 "name": "BaseBdev3", 00:41:10.336 "uuid": "bbdfbcb6-1a16-5917-97db-cebf1ac9d6fb", 00:41:10.336 "is_configured": true, 00:41:10.336 "data_offset": 2048, 00:41:10.336 "data_size": 63488 00:41:10.336 } 00:41:10.336 ] 00:41:10.336 }' 00:41:10.336 17:38:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:10.336 17:38:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:10.904 17:38:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:41:10.904 17:38:11 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:41:10.904 17:38:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:10.904 [2024-11-26 17:38:11.362462] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:41:10.904 [2024-11-26 17:38:11.362595] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:10.904 [2024-11-26 17:38:11.362627] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:41:10.904 [2024-11-26 17:38:11.362650] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:10.904 [2024-11-26 17:38:11.363322] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:10.904 [2024-11-26 17:38:11.363359] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:41:10.904 [2024-11-26 17:38:11.363501] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:41:10.904 [2024-11-26 17:38:11.363546] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:41:10.904 [2024-11-26 17:38:11.363561] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:41:10.904 [2024-11-26 17:38:11.363594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:41:10.904 [2024-11-26 17:38:11.382317] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:41:10.904 spare 00:41:10.904 17:38:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:10.905 17:38:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:41:10.905 [2024-11-26 17:38:11.390297] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:41:11.853 17:38:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:11.853 17:38:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:11.853 17:38:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:11.853 17:38:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:11.853 17:38:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:11.853 17:38:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:11.853 17:38:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:11.853 17:38:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:11.853 17:38:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:11.853 17:38:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:11.853 17:38:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:11.853 "name": "raid_bdev1", 00:41:11.853 "uuid": "c925067f-5896-4dcc-9fd1-bc247a397390", 00:41:11.853 "strip_size_kb": 64, 00:41:11.853 "state": 
"online", 00:41:11.853 "raid_level": "raid5f", 00:41:11.853 "superblock": true, 00:41:11.853 "num_base_bdevs": 3, 00:41:11.853 "num_base_bdevs_discovered": 3, 00:41:11.853 "num_base_bdevs_operational": 3, 00:41:11.853 "process": { 00:41:11.853 "type": "rebuild", 00:41:11.853 "target": "spare", 00:41:11.853 "progress": { 00:41:11.853 "blocks": 18432, 00:41:11.853 "percent": 14 00:41:11.853 } 00:41:11.853 }, 00:41:11.853 "base_bdevs_list": [ 00:41:11.853 { 00:41:11.853 "name": "spare", 00:41:11.853 "uuid": "3ee66f60-4285-5bd5-be38-04d3cfbcdb2e", 00:41:11.853 "is_configured": true, 00:41:11.853 "data_offset": 2048, 00:41:11.853 "data_size": 63488 00:41:11.853 }, 00:41:11.853 { 00:41:11.853 "name": "BaseBdev2", 00:41:11.853 "uuid": "eac02e18-a855-58d3-9002-0d96267f4e46", 00:41:11.853 "is_configured": true, 00:41:11.853 "data_offset": 2048, 00:41:11.853 "data_size": 63488 00:41:11.853 }, 00:41:11.853 { 00:41:11.853 "name": "BaseBdev3", 00:41:11.853 "uuid": "bbdfbcb6-1a16-5917-97db-cebf1ac9d6fb", 00:41:11.853 "is_configured": true, 00:41:11.854 "data_offset": 2048, 00:41:11.854 "data_size": 63488 00:41:11.854 } 00:41:11.854 ] 00:41:11.854 }' 00:41:11.854 17:38:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:11.854 17:38:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:11.854 17:38:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:11.854 17:38:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:11.854 17:38:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:41:11.854 17:38:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:11.854 17:38:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:11.854 [2024-11-26 17:38:12.525536] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:41:12.128 [2024-11-26 17:38:12.604596] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:41:12.128 [2024-11-26 17:38:12.604691] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:12.128 [2024-11-26 17:38:12.604717] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:41:12.128 [2024-11-26 17:38:12.604728] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:41:12.128 17:38:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:12.128 17:38:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:41:12.128 17:38:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:12.128 17:38:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:12.128 17:38:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:12.128 17:38:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:12.128 17:38:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:41:12.128 17:38:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:12.128 17:38:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:12.128 17:38:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:12.128 17:38:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:12.128 17:38:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:12.128 17:38:12 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:12.128 17:38:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:12.128 17:38:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:12.128 17:38:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:12.128 17:38:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:12.128 "name": "raid_bdev1", 00:41:12.128 "uuid": "c925067f-5896-4dcc-9fd1-bc247a397390", 00:41:12.128 "strip_size_kb": 64, 00:41:12.128 "state": "online", 00:41:12.128 "raid_level": "raid5f", 00:41:12.128 "superblock": true, 00:41:12.128 "num_base_bdevs": 3, 00:41:12.128 "num_base_bdevs_discovered": 2, 00:41:12.128 "num_base_bdevs_operational": 2, 00:41:12.128 "base_bdevs_list": [ 00:41:12.128 { 00:41:12.128 "name": null, 00:41:12.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:12.128 "is_configured": false, 00:41:12.128 "data_offset": 0, 00:41:12.128 "data_size": 63488 00:41:12.128 }, 00:41:12.128 { 00:41:12.128 "name": "BaseBdev2", 00:41:12.128 "uuid": "eac02e18-a855-58d3-9002-0d96267f4e46", 00:41:12.128 "is_configured": true, 00:41:12.128 "data_offset": 2048, 00:41:12.128 "data_size": 63488 00:41:12.128 }, 00:41:12.128 { 00:41:12.128 "name": "BaseBdev3", 00:41:12.128 "uuid": "bbdfbcb6-1a16-5917-97db-cebf1ac9d6fb", 00:41:12.128 "is_configured": true, 00:41:12.128 "data_offset": 2048, 00:41:12.128 "data_size": 63488 00:41:12.128 } 00:41:12.128 ] 00:41:12.128 }' 00:41:12.128 17:38:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:12.128 17:38:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:12.387 17:38:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:41:12.387 17:38:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:41:12.387 17:38:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:41:12.387 17:38:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:41:12.387 17:38:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:12.648 17:38:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:12.648 17:38:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:12.648 17:38:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:12.648 17:38:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:12.648 17:38:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:12.648 17:38:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:12.648 "name": "raid_bdev1", 00:41:12.648 "uuid": "c925067f-5896-4dcc-9fd1-bc247a397390", 00:41:12.648 "strip_size_kb": 64, 00:41:12.648 "state": "online", 00:41:12.648 "raid_level": "raid5f", 00:41:12.648 "superblock": true, 00:41:12.648 "num_base_bdevs": 3, 00:41:12.648 "num_base_bdevs_discovered": 2, 00:41:12.648 "num_base_bdevs_operational": 2, 00:41:12.648 "base_bdevs_list": [ 00:41:12.648 { 00:41:12.648 "name": null, 00:41:12.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:12.648 "is_configured": false, 00:41:12.648 "data_offset": 0, 00:41:12.648 "data_size": 63488 00:41:12.648 }, 00:41:12.648 { 00:41:12.648 "name": "BaseBdev2", 00:41:12.648 "uuid": "eac02e18-a855-58d3-9002-0d96267f4e46", 00:41:12.648 "is_configured": true, 00:41:12.648 "data_offset": 2048, 00:41:12.648 "data_size": 63488 00:41:12.648 }, 00:41:12.648 { 00:41:12.648 "name": "BaseBdev3", 00:41:12.648 "uuid": "bbdfbcb6-1a16-5917-97db-cebf1ac9d6fb", 00:41:12.648 "is_configured": true, 
00:41:12.648 "data_offset": 2048, 00:41:12.648 "data_size": 63488 00:41:12.648 } 00:41:12.648 ] 00:41:12.648 }' 00:41:12.648 17:38:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:12.648 17:38:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:41:12.648 17:38:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:12.648 17:38:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:41:12.648 17:38:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:41:12.648 17:38:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:12.648 17:38:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:12.648 17:38:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:12.648 17:38:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:41:12.648 17:38:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:12.648 17:38:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:12.648 [2024-11-26 17:38:13.261455] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:41:12.648 [2024-11-26 17:38:13.261548] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:12.648 [2024-11-26 17:38:13.261582] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:41:12.648 [2024-11-26 17:38:13.261594] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:12.648 [2024-11-26 17:38:13.262174] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:12.648 [2024-11-26 
17:38:13.262201] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:41:12.648 [2024-11-26 17:38:13.262301] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:41:12.648 [2024-11-26 17:38:13.262328] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:41:12.648 [2024-11-26 17:38:13.262353] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:41:12.648 [2024-11-26 17:38:13.262367] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:41:12.648 BaseBdev1 00:41:12.648 17:38:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:12.648 17:38:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:41:13.587 17:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:41:13.587 17:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:13.587 17:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:13.587 17:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:13.587 17:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:13.587 17:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:41:13.587 17:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:13.587 17:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:13.587 17:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:13.587 17:38:14 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:13.847 17:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:13.847 17:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:13.847 17:38:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:13.847 17:38:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:13.847 17:38:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:13.847 17:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:13.847 "name": "raid_bdev1", 00:41:13.847 "uuid": "c925067f-5896-4dcc-9fd1-bc247a397390", 00:41:13.847 "strip_size_kb": 64, 00:41:13.847 "state": "online", 00:41:13.847 "raid_level": "raid5f", 00:41:13.847 "superblock": true, 00:41:13.847 "num_base_bdevs": 3, 00:41:13.847 "num_base_bdevs_discovered": 2, 00:41:13.847 "num_base_bdevs_operational": 2, 00:41:13.847 "base_bdevs_list": [ 00:41:13.847 { 00:41:13.847 "name": null, 00:41:13.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:13.847 "is_configured": false, 00:41:13.847 "data_offset": 0, 00:41:13.847 "data_size": 63488 00:41:13.847 }, 00:41:13.847 { 00:41:13.847 "name": "BaseBdev2", 00:41:13.847 "uuid": "eac02e18-a855-58d3-9002-0d96267f4e46", 00:41:13.847 "is_configured": true, 00:41:13.847 "data_offset": 2048, 00:41:13.847 "data_size": 63488 00:41:13.847 }, 00:41:13.847 { 00:41:13.847 "name": "BaseBdev3", 00:41:13.847 "uuid": "bbdfbcb6-1a16-5917-97db-cebf1ac9d6fb", 00:41:13.847 "is_configured": true, 00:41:13.847 "data_offset": 2048, 00:41:13.847 "data_size": 63488 00:41:13.847 } 00:41:13.847 ] 00:41:13.847 }' 00:41:13.847 17:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:13.847 17:38:14 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:41:14.107 17:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:41:14.107 17:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:14.107 17:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:41:14.107 17:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:41:14.107 17:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:14.107 17:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:14.107 17:38:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:14.107 17:38:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:14.107 17:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:14.107 17:38:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:14.107 17:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:14.107 "name": "raid_bdev1", 00:41:14.107 "uuid": "c925067f-5896-4dcc-9fd1-bc247a397390", 00:41:14.107 "strip_size_kb": 64, 00:41:14.107 "state": "online", 00:41:14.107 "raid_level": "raid5f", 00:41:14.107 "superblock": true, 00:41:14.107 "num_base_bdevs": 3, 00:41:14.107 "num_base_bdevs_discovered": 2, 00:41:14.107 "num_base_bdevs_operational": 2, 00:41:14.107 "base_bdevs_list": [ 00:41:14.107 { 00:41:14.107 "name": null, 00:41:14.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:14.107 "is_configured": false, 00:41:14.107 "data_offset": 0, 00:41:14.107 "data_size": 63488 00:41:14.107 }, 00:41:14.107 { 00:41:14.107 "name": "BaseBdev2", 00:41:14.107 "uuid": "eac02e18-a855-58d3-9002-0d96267f4e46", 
00:41:14.107 "is_configured": true, 00:41:14.107 "data_offset": 2048, 00:41:14.107 "data_size": 63488 00:41:14.107 }, 00:41:14.107 { 00:41:14.107 "name": "BaseBdev3", 00:41:14.107 "uuid": "bbdfbcb6-1a16-5917-97db-cebf1ac9d6fb", 00:41:14.107 "is_configured": true, 00:41:14.107 "data_offset": 2048, 00:41:14.107 "data_size": 63488 00:41:14.107 } 00:41:14.107 ] 00:41:14.107 }' 00:41:14.107 17:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:14.107 17:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:41:14.107 17:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:14.367 17:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:41:14.367 17:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:41:14.367 17:38:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:41:14.367 17:38:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:41:14.367 17:38:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:41:14.367 17:38:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:14.367 17:38:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:41:14.367 17:38:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:14.367 17:38:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:41:14.367 17:38:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:14.367 17:38:14 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:14.367 [2024-11-26 17:38:14.838850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:41:14.367 [2024-11-26 17:38:14.839071] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:41:14.367 [2024-11-26 17:38:14.839099] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:41:14.367 request: 00:41:14.367 { 00:41:14.367 "base_bdev": "BaseBdev1", 00:41:14.367 "raid_bdev": "raid_bdev1", 00:41:14.367 "method": "bdev_raid_add_base_bdev", 00:41:14.367 "req_id": 1 00:41:14.367 } 00:41:14.367 Got JSON-RPC error response 00:41:14.367 response: 00:41:14.367 { 00:41:14.367 "code": -22, 00:41:14.367 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:41:14.367 } 00:41:14.367 17:38:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:41:14.367 17:38:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:41:14.367 17:38:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:14.367 17:38:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:41:14.367 17:38:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:14.367 17:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:41:15.306 17:38:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:41:15.306 17:38:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:15.306 17:38:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:15.306 17:38:15 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:15.306 17:38:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:15.306 17:38:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:41:15.306 17:38:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:15.306 17:38:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:15.306 17:38:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:15.306 17:38:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:15.306 17:38:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:15.306 17:38:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:15.306 17:38:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:15.306 17:38:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:15.306 17:38:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:15.306 17:38:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:15.306 "name": "raid_bdev1", 00:41:15.306 "uuid": "c925067f-5896-4dcc-9fd1-bc247a397390", 00:41:15.306 "strip_size_kb": 64, 00:41:15.306 "state": "online", 00:41:15.306 "raid_level": "raid5f", 00:41:15.306 "superblock": true, 00:41:15.306 "num_base_bdevs": 3, 00:41:15.306 "num_base_bdevs_discovered": 2, 00:41:15.306 "num_base_bdevs_operational": 2, 00:41:15.306 "base_bdevs_list": [ 00:41:15.306 { 00:41:15.306 "name": null, 00:41:15.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:15.306 "is_configured": false, 00:41:15.306 "data_offset": 0, 00:41:15.306 "data_size": 63488 00:41:15.306 }, 00:41:15.306 { 00:41:15.306 
"name": "BaseBdev2", 00:41:15.306 "uuid": "eac02e18-a855-58d3-9002-0d96267f4e46", 00:41:15.306 "is_configured": true, 00:41:15.306 "data_offset": 2048, 00:41:15.306 "data_size": 63488 00:41:15.306 }, 00:41:15.306 { 00:41:15.306 "name": "BaseBdev3", 00:41:15.306 "uuid": "bbdfbcb6-1a16-5917-97db-cebf1ac9d6fb", 00:41:15.306 "is_configured": true, 00:41:15.306 "data_offset": 2048, 00:41:15.306 "data_size": 63488 00:41:15.306 } 00:41:15.306 ] 00:41:15.306 }' 00:41:15.306 17:38:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:15.306 17:38:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:15.876 17:38:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:41:15.876 17:38:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:15.876 17:38:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:41:15.876 17:38:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:41:15.876 17:38:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:15.876 17:38:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:15.876 17:38:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:15.876 17:38:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:15.876 17:38:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:15.876 17:38:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:15.876 17:38:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:15.876 "name": "raid_bdev1", 00:41:15.876 "uuid": "c925067f-5896-4dcc-9fd1-bc247a397390", 00:41:15.876 
"strip_size_kb": 64, 00:41:15.876 "state": "online", 00:41:15.876 "raid_level": "raid5f", 00:41:15.876 "superblock": true, 00:41:15.876 "num_base_bdevs": 3, 00:41:15.876 "num_base_bdevs_discovered": 2, 00:41:15.876 "num_base_bdevs_operational": 2, 00:41:15.876 "base_bdevs_list": [ 00:41:15.876 { 00:41:15.876 "name": null, 00:41:15.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:15.876 "is_configured": false, 00:41:15.876 "data_offset": 0, 00:41:15.876 "data_size": 63488 00:41:15.876 }, 00:41:15.876 { 00:41:15.876 "name": "BaseBdev2", 00:41:15.876 "uuid": "eac02e18-a855-58d3-9002-0d96267f4e46", 00:41:15.876 "is_configured": true, 00:41:15.876 "data_offset": 2048, 00:41:15.876 "data_size": 63488 00:41:15.876 }, 00:41:15.876 { 00:41:15.876 "name": "BaseBdev3", 00:41:15.876 "uuid": "bbdfbcb6-1a16-5917-97db-cebf1ac9d6fb", 00:41:15.876 "is_configured": true, 00:41:15.876 "data_offset": 2048, 00:41:15.876 "data_size": 63488 00:41:15.876 } 00:41:15.876 ] 00:41:15.876 }' 00:41:15.876 17:38:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:15.876 17:38:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:41:15.876 17:38:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:15.876 17:38:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:41:15.876 17:38:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82314 00:41:15.876 17:38:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82314 ']' 00:41:15.876 17:38:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82314 00:41:15.876 17:38:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:41:15.876 17:38:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:15.876 17:38:16 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82314 00:41:15.876 17:38:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:15.876 17:38:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:15.876 killing process with pid 82314 00:41:15.876 17:38:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82314' 00:41:15.876 17:38:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82314 00:41:15.876 Received shutdown signal, test time was about 60.000000 seconds 00:41:15.876 00:41:15.876 Latency(us) 00:41:15.876 [2024-11-26T17:38:16.571Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:15.876 [2024-11-26T17:38:16.571Z] =================================================================================================================== 00:41:15.876 [2024-11-26T17:38:16.571Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:41:15.876 [2024-11-26 17:38:16.421271] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:41:15.876 17:38:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82314 00:41:15.876 [2024-11-26 17:38:16.421459] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:41:15.876 [2024-11-26 17:38:16.421566] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:41:15.876 [2024-11-26 17:38:16.421583] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:41:16.445 [2024-11-26 17:38:16.866916] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:41:17.848 17:38:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:41:17.848 00:41:17.848 real 0m23.500s 00:41:17.848 user 0m29.978s 
00:41:17.848 sys 0m2.757s 00:41:17.848 17:38:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:17.848 17:38:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:17.848 ************************************ 00:41:17.848 END TEST raid5f_rebuild_test_sb 00:41:17.848 ************************************ 00:41:17.848 17:38:18 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:41:17.848 17:38:18 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:41:17.848 17:38:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:41:17.848 17:38:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:17.848 17:38:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:41:17.848 ************************************ 00:41:17.848 START TEST raid5f_state_function_test 00:41:17.848 ************************************ 00:41:17.848 17:38:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:41:17.848 17:38:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:41:17.848 17:38:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:41:17.848 17:38:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:41:17.848 17:38:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:41:17.848 17:38:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:41:17.848 17:38:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:41:17.848 17:38:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:41:17.848 17:38:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:41:17.848 17:38:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:41:17.848 17:38:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:41:17.848 17:38:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:41:17.848 17:38:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:41:17.848 17:38:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:41:17.848 17:38:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:41:17.848 17:38:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:41:17.848 17:38:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:41:17.848 17:38:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:41:17.848 17:38:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:41:17.848 17:38:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:41:17.848 17:38:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:41:17.848 17:38:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:41:17.848 17:38:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:41:17.848 17:38:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:41:17.848 17:38:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:41:17.848 17:38:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:41:17.848 17:38:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:41:17.848 17:38:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:41:17.848 17:38:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:41:17.848 17:38:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:41:17.848 17:38:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83070 00:41:17.848 17:38:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:41:17.848 Process raid pid: 83070 00:41:17.848 17:38:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83070' 00:41:17.848 17:38:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83070 00:41:17.848 17:38:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 83070 ']' 00:41:17.848 17:38:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:17.848 17:38:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:17.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:17.848 17:38:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:17.848 17:38:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:17.848 17:38:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:17.848 [2024-11-26 17:38:18.302872] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:41:17.848 [2024-11-26 17:38:18.303016] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:17.848 [2024-11-26 17:38:18.485149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:18.109 [2024-11-26 17:38:18.607474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:18.368 [2024-11-26 17:38:18.818418] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:41:18.368 [2024-11-26 17:38:18.818477] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:41:18.628 17:38:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:18.628 17:38:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:41:18.628 17:38:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:41:18.628 17:38:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:18.628 17:38:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:18.628 [2024-11-26 17:38:19.167795] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:41:18.628 [2024-11-26 17:38:19.167860] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:41:18.628 [2024-11-26 17:38:19.167871] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:41:18.628 [2024-11-26 17:38:19.167881] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:41:18.628 [2024-11-26 17:38:19.167887] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:41:18.628 [2024-11-26 17:38:19.167897] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:41:18.628 [2024-11-26 17:38:19.167903] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:41:18.628 [2024-11-26 17:38:19.167912] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:41:18.628 17:38:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:18.628 17:38:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:41:18.628 17:38:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:41:18.628 17:38:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:41:18.629 17:38:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:18.629 17:38:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:18.629 17:38:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:41:18.629 17:38:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:18.629 17:38:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:18.629 17:38:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:18.629 17:38:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:18.629 17:38:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:18.629 17:38:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:18.629 17:38:19 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:18.629 17:38:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:18.629 17:38:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:18.629 17:38:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:18.629 "name": "Existed_Raid", 00:41:18.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:18.629 "strip_size_kb": 64, 00:41:18.629 "state": "configuring", 00:41:18.629 "raid_level": "raid5f", 00:41:18.629 "superblock": false, 00:41:18.629 "num_base_bdevs": 4, 00:41:18.629 "num_base_bdevs_discovered": 0, 00:41:18.629 "num_base_bdevs_operational": 4, 00:41:18.629 "base_bdevs_list": [ 00:41:18.629 { 00:41:18.629 "name": "BaseBdev1", 00:41:18.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:18.629 "is_configured": false, 00:41:18.629 "data_offset": 0, 00:41:18.629 "data_size": 0 00:41:18.629 }, 00:41:18.629 { 00:41:18.629 "name": "BaseBdev2", 00:41:18.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:18.629 "is_configured": false, 00:41:18.629 "data_offset": 0, 00:41:18.629 "data_size": 0 00:41:18.629 }, 00:41:18.629 { 00:41:18.629 "name": "BaseBdev3", 00:41:18.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:18.629 "is_configured": false, 00:41:18.629 "data_offset": 0, 00:41:18.629 "data_size": 0 00:41:18.629 }, 00:41:18.629 { 00:41:18.629 "name": "BaseBdev4", 00:41:18.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:18.629 "is_configured": false, 00:41:18.629 "data_offset": 0, 00:41:18.629 "data_size": 0 00:41:18.629 } 00:41:18.629 ] 00:41:18.629 }' 00:41:18.629 17:38:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:18.629 17:38:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:19.200 17:38:19 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:41:19.200 17:38:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:19.200 17:38:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:19.200 [2024-11-26 17:38:19.587056] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:41:19.200 [2024-11-26 17:38:19.587110] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:41:19.200 17:38:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:19.200 17:38:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:41:19.200 17:38:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:19.200 17:38:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:19.200 [2024-11-26 17:38:19.599026] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:41:19.200 [2024-11-26 17:38:19.599075] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:41:19.200 [2024-11-26 17:38:19.599085] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:41:19.200 [2024-11-26 17:38:19.599095] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:41:19.200 [2024-11-26 17:38:19.599101] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:41:19.200 [2024-11-26 17:38:19.599112] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:41:19.200 [2024-11-26 17:38:19.599118] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:41:19.200 [2024-11-26 17:38:19.599127] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:41:19.200 17:38:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:19.200 17:38:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:41:19.200 17:38:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:19.200 17:38:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:19.200 [2024-11-26 17:38:19.654459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:41:19.200 BaseBdev1 00:41:19.200 17:38:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:19.200 17:38:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:41:19.200 17:38:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:41:19.200 17:38:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:41:19.200 17:38:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:41:19.200 17:38:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:41:19.200 17:38:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:41:19.200 17:38:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:41:19.200 17:38:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:19.200 17:38:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:19.200 17:38:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:19.200 
17:38:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:41:19.200 17:38:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:19.200 17:38:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:19.200 [ 00:41:19.200 { 00:41:19.200 "name": "BaseBdev1", 00:41:19.200 "aliases": [ 00:41:19.200 "e1fa3023-dd52-418d-91ce-93f1e4a35bdd" 00:41:19.200 ], 00:41:19.200 "product_name": "Malloc disk", 00:41:19.200 "block_size": 512, 00:41:19.200 "num_blocks": 65536, 00:41:19.200 "uuid": "e1fa3023-dd52-418d-91ce-93f1e4a35bdd", 00:41:19.200 "assigned_rate_limits": { 00:41:19.200 "rw_ios_per_sec": 0, 00:41:19.200 "rw_mbytes_per_sec": 0, 00:41:19.200 "r_mbytes_per_sec": 0, 00:41:19.200 "w_mbytes_per_sec": 0 00:41:19.200 }, 00:41:19.200 "claimed": true, 00:41:19.200 "claim_type": "exclusive_write", 00:41:19.200 "zoned": false, 00:41:19.200 "supported_io_types": { 00:41:19.200 "read": true, 00:41:19.200 "write": true, 00:41:19.200 "unmap": true, 00:41:19.200 "flush": true, 00:41:19.200 "reset": true, 00:41:19.200 "nvme_admin": false, 00:41:19.200 "nvme_io": false, 00:41:19.200 "nvme_io_md": false, 00:41:19.200 "write_zeroes": true, 00:41:19.200 "zcopy": true, 00:41:19.200 "get_zone_info": false, 00:41:19.200 "zone_management": false, 00:41:19.200 "zone_append": false, 00:41:19.200 "compare": false, 00:41:19.200 "compare_and_write": false, 00:41:19.200 "abort": true, 00:41:19.200 "seek_hole": false, 00:41:19.200 "seek_data": false, 00:41:19.200 "copy": true, 00:41:19.200 "nvme_iov_md": false 00:41:19.200 }, 00:41:19.200 "memory_domains": [ 00:41:19.200 { 00:41:19.200 "dma_device_id": "system", 00:41:19.200 "dma_device_type": 1 00:41:19.200 }, 00:41:19.200 { 00:41:19.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:19.200 "dma_device_type": 2 00:41:19.200 } 00:41:19.200 ], 00:41:19.200 "driver_specific": {} 00:41:19.200 } 
00:41:19.200 ] 00:41:19.200 17:38:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:19.200 17:38:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:41:19.200 17:38:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:41:19.200 17:38:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:41:19.201 17:38:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:41:19.201 17:38:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:19.201 17:38:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:19.201 17:38:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:41:19.201 17:38:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:19.201 17:38:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:19.201 17:38:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:19.201 17:38:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:19.201 17:38:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:19.201 17:38:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:19.201 17:38:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:19.201 17:38:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:19.201 17:38:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:41:19.201 17:38:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:19.201 "name": "Existed_Raid", 00:41:19.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:19.201 "strip_size_kb": 64, 00:41:19.201 "state": "configuring", 00:41:19.201 "raid_level": "raid5f", 00:41:19.201 "superblock": false, 00:41:19.201 "num_base_bdevs": 4, 00:41:19.201 "num_base_bdevs_discovered": 1, 00:41:19.201 "num_base_bdevs_operational": 4, 00:41:19.201 "base_bdevs_list": [ 00:41:19.201 { 00:41:19.201 "name": "BaseBdev1", 00:41:19.201 "uuid": "e1fa3023-dd52-418d-91ce-93f1e4a35bdd", 00:41:19.201 "is_configured": true, 00:41:19.201 "data_offset": 0, 00:41:19.201 "data_size": 65536 00:41:19.201 }, 00:41:19.201 { 00:41:19.201 "name": "BaseBdev2", 00:41:19.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:19.201 "is_configured": false, 00:41:19.201 "data_offset": 0, 00:41:19.201 "data_size": 0 00:41:19.201 }, 00:41:19.201 { 00:41:19.201 "name": "BaseBdev3", 00:41:19.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:19.201 "is_configured": false, 00:41:19.201 "data_offset": 0, 00:41:19.201 "data_size": 0 00:41:19.201 }, 00:41:19.201 { 00:41:19.201 "name": "BaseBdev4", 00:41:19.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:19.201 "is_configured": false, 00:41:19.201 "data_offset": 0, 00:41:19.201 "data_size": 0 00:41:19.201 } 00:41:19.201 ] 00:41:19.201 }' 00:41:19.201 17:38:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:19.201 17:38:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:19.461 17:38:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:41:19.461 17:38:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:19.461 17:38:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:19.461 
[2024-11-26 17:38:20.101751] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:41:19.461 [2024-11-26 17:38:20.101808] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:41:19.461 17:38:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:19.461 17:38:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:41:19.461 17:38:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:19.461 17:38:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:19.461 [2024-11-26 17:38:20.117758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:41:19.461 [2024-11-26 17:38:20.119833] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:41:19.461 [2024-11-26 17:38:20.119874] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:41:19.461 [2024-11-26 17:38:20.119883] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:41:19.461 [2024-11-26 17:38:20.119894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:41:19.461 [2024-11-26 17:38:20.119900] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:41:19.461 [2024-11-26 17:38:20.119909] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:41:19.461 17:38:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:19.461 17:38:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:41:19.461 17:38:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:41:19.461 17:38:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:41:19.461 17:38:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:41:19.461 17:38:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:41:19.461 17:38:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:19.461 17:38:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:19.461 17:38:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:41:19.461 17:38:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:19.461 17:38:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:19.461 17:38:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:19.461 17:38:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:19.461 17:38:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:19.461 17:38:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:19.461 17:38:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:19.461 17:38:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:19.461 17:38:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:19.722 17:38:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:19.722 "name": "Existed_Raid", 00:41:19.722 "uuid": "00000000-0000-0000-0000-000000000000", 
00:41:19.722 "strip_size_kb": 64, 00:41:19.722 "state": "configuring", 00:41:19.722 "raid_level": "raid5f", 00:41:19.722 "superblock": false, 00:41:19.722 "num_base_bdevs": 4, 00:41:19.722 "num_base_bdevs_discovered": 1, 00:41:19.722 "num_base_bdevs_operational": 4, 00:41:19.722 "base_bdevs_list": [ 00:41:19.722 { 00:41:19.722 "name": "BaseBdev1", 00:41:19.722 "uuid": "e1fa3023-dd52-418d-91ce-93f1e4a35bdd", 00:41:19.722 "is_configured": true, 00:41:19.722 "data_offset": 0, 00:41:19.722 "data_size": 65536 00:41:19.722 }, 00:41:19.722 { 00:41:19.722 "name": "BaseBdev2", 00:41:19.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:19.722 "is_configured": false, 00:41:19.722 "data_offset": 0, 00:41:19.722 "data_size": 0 00:41:19.722 }, 00:41:19.722 { 00:41:19.722 "name": "BaseBdev3", 00:41:19.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:19.722 "is_configured": false, 00:41:19.722 "data_offset": 0, 00:41:19.722 "data_size": 0 00:41:19.722 }, 00:41:19.722 { 00:41:19.722 "name": "BaseBdev4", 00:41:19.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:19.722 "is_configured": false, 00:41:19.722 "data_offset": 0, 00:41:19.722 "data_size": 0 00:41:19.722 } 00:41:19.722 ] 00:41:19.722 }' 00:41:19.722 17:38:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:19.722 17:38:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:19.983 17:38:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:41:19.983 17:38:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:19.983 17:38:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:19.983 [2024-11-26 17:38:20.608896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:41:19.983 BaseBdev2 00:41:19.983 17:38:20 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:19.983 17:38:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:41:19.983 17:38:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:41:19.983 17:38:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:41:19.983 17:38:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:41:19.983 17:38:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:41:19.983 17:38:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:41:19.983 17:38:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:41:19.983 17:38:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:19.983 17:38:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:19.983 17:38:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:19.983 17:38:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:41:19.983 17:38:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:19.983 17:38:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:19.983 [ 00:41:19.983 { 00:41:19.983 "name": "BaseBdev2", 00:41:19.983 "aliases": [ 00:41:19.983 "f8d56181-4df8-49bf-b229-498f6c412cf0" 00:41:19.983 ], 00:41:19.983 "product_name": "Malloc disk", 00:41:19.983 "block_size": 512, 00:41:19.983 "num_blocks": 65536, 00:41:19.983 "uuid": "f8d56181-4df8-49bf-b229-498f6c412cf0", 00:41:19.983 "assigned_rate_limits": { 00:41:19.983 "rw_ios_per_sec": 0, 00:41:19.983 "rw_mbytes_per_sec": 0, 00:41:19.983 
"r_mbytes_per_sec": 0, 00:41:19.983 "w_mbytes_per_sec": 0 00:41:19.983 }, 00:41:19.983 "claimed": true, 00:41:19.983 "claim_type": "exclusive_write", 00:41:19.983 "zoned": false, 00:41:19.983 "supported_io_types": { 00:41:19.983 "read": true, 00:41:19.983 "write": true, 00:41:19.983 "unmap": true, 00:41:19.983 "flush": true, 00:41:19.983 "reset": true, 00:41:19.983 "nvme_admin": false, 00:41:19.983 "nvme_io": false, 00:41:19.983 "nvme_io_md": false, 00:41:19.983 "write_zeroes": true, 00:41:19.983 "zcopy": true, 00:41:19.983 "get_zone_info": false, 00:41:19.983 "zone_management": false, 00:41:19.983 "zone_append": false, 00:41:19.984 "compare": false, 00:41:19.984 "compare_and_write": false, 00:41:19.984 "abort": true, 00:41:19.984 "seek_hole": false, 00:41:19.984 "seek_data": false, 00:41:19.984 "copy": true, 00:41:19.984 "nvme_iov_md": false 00:41:19.984 }, 00:41:19.984 "memory_domains": [ 00:41:19.984 { 00:41:19.984 "dma_device_id": "system", 00:41:19.984 "dma_device_type": 1 00:41:19.984 }, 00:41:19.984 { 00:41:19.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:19.984 "dma_device_type": 2 00:41:19.984 } 00:41:19.984 ], 00:41:19.984 "driver_specific": {} 00:41:19.984 } 00:41:19.984 ] 00:41:19.984 17:38:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:19.984 17:38:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:41:19.984 17:38:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:41:19.984 17:38:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:41:19.984 17:38:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:41:19.984 17:38:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:41:19.984 17:38:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:41:19.984 17:38:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:19.984 17:38:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:19.984 17:38:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:41:19.984 17:38:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:19.984 17:38:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:19.984 17:38:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:19.984 17:38:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:19.984 17:38:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:19.984 17:38:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:19.984 17:38:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:19.984 17:38:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:19.984 17:38:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:20.244 17:38:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:20.244 "name": "Existed_Raid", 00:41:20.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:20.244 "strip_size_kb": 64, 00:41:20.244 "state": "configuring", 00:41:20.244 "raid_level": "raid5f", 00:41:20.244 "superblock": false, 00:41:20.244 "num_base_bdevs": 4, 00:41:20.244 "num_base_bdevs_discovered": 2, 00:41:20.244 "num_base_bdevs_operational": 4, 00:41:20.244 "base_bdevs_list": [ 00:41:20.244 { 00:41:20.244 "name": "BaseBdev1", 00:41:20.244 "uuid": 
"e1fa3023-dd52-418d-91ce-93f1e4a35bdd", 00:41:20.244 "is_configured": true, 00:41:20.244 "data_offset": 0, 00:41:20.244 "data_size": 65536 00:41:20.244 }, 00:41:20.244 { 00:41:20.244 "name": "BaseBdev2", 00:41:20.244 "uuid": "f8d56181-4df8-49bf-b229-498f6c412cf0", 00:41:20.244 "is_configured": true, 00:41:20.244 "data_offset": 0, 00:41:20.244 "data_size": 65536 00:41:20.244 }, 00:41:20.244 { 00:41:20.244 "name": "BaseBdev3", 00:41:20.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:20.244 "is_configured": false, 00:41:20.244 "data_offset": 0, 00:41:20.244 "data_size": 0 00:41:20.244 }, 00:41:20.244 { 00:41:20.244 "name": "BaseBdev4", 00:41:20.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:20.244 "is_configured": false, 00:41:20.244 "data_offset": 0, 00:41:20.244 "data_size": 0 00:41:20.244 } 00:41:20.244 ] 00:41:20.244 }' 00:41:20.244 17:38:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:20.244 17:38:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:20.504 17:38:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:41:20.504 17:38:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:20.504 17:38:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:20.504 [2024-11-26 17:38:21.172188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:41:20.504 BaseBdev3 00:41:20.504 17:38:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:20.504 17:38:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:41:20.504 17:38:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:41:20.504 17:38:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:41:20.504 17:38:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:41:20.504 17:38:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:41:20.504 17:38:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:41:20.504 17:38:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:41:20.504 17:38:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:20.504 17:38:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:20.504 17:38:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:20.504 17:38:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:41:20.504 17:38:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:20.504 17:38:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:20.504 [ 00:41:20.504 { 00:41:20.504 "name": "BaseBdev3", 00:41:20.504 "aliases": [ 00:41:20.764 "da5af21d-a18c-43a1-b35c-5192861362e4" 00:41:20.764 ], 00:41:20.764 "product_name": "Malloc disk", 00:41:20.764 "block_size": 512, 00:41:20.764 "num_blocks": 65536, 00:41:20.764 "uuid": "da5af21d-a18c-43a1-b35c-5192861362e4", 00:41:20.764 "assigned_rate_limits": { 00:41:20.764 "rw_ios_per_sec": 0, 00:41:20.764 "rw_mbytes_per_sec": 0, 00:41:20.764 "r_mbytes_per_sec": 0, 00:41:20.764 "w_mbytes_per_sec": 0 00:41:20.764 }, 00:41:20.764 "claimed": true, 00:41:20.764 "claim_type": "exclusive_write", 00:41:20.764 "zoned": false, 00:41:20.764 "supported_io_types": { 00:41:20.764 "read": true, 00:41:20.764 "write": true, 00:41:20.764 "unmap": true, 00:41:20.764 "flush": true, 00:41:20.764 "reset": true, 00:41:20.764 "nvme_admin": false, 
00:41:20.764 "nvme_io": false, 00:41:20.764 "nvme_io_md": false, 00:41:20.764 "write_zeroes": true, 00:41:20.764 "zcopy": true, 00:41:20.764 "get_zone_info": false, 00:41:20.764 "zone_management": false, 00:41:20.764 "zone_append": false, 00:41:20.764 "compare": false, 00:41:20.764 "compare_and_write": false, 00:41:20.764 "abort": true, 00:41:20.764 "seek_hole": false, 00:41:20.764 "seek_data": false, 00:41:20.764 "copy": true, 00:41:20.764 "nvme_iov_md": false 00:41:20.764 }, 00:41:20.764 "memory_domains": [ 00:41:20.764 { 00:41:20.764 "dma_device_id": "system", 00:41:20.764 "dma_device_type": 1 00:41:20.764 }, 00:41:20.764 { 00:41:20.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:20.764 "dma_device_type": 2 00:41:20.764 } 00:41:20.764 ], 00:41:20.764 "driver_specific": {} 00:41:20.764 } 00:41:20.764 ] 00:41:20.764 17:38:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:20.764 17:38:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:41:20.764 17:38:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:41:20.764 17:38:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:41:20.764 17:38:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:41:20.764 17:38:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:41:20.764 17:38:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:41:20.764 17:38:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:20.764 17:38:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:20.764 17:38:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:41:20.764 17:38:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:20.764 17:38:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:20.764 17:38:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:20.764 17:38:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:20.764 17:38:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:20.764 17:38:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:20.764 17:38:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:20.764 17:38:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:20.764 17:38:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:20.764 17:38:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:20.764 "name": "Existed_Raid", 00:41:20.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:20.764 "strip_size_kb": 64, 00:41:20.764 "state": "configuring", 00:41:20.764 "raid_level": "raid5f", 00:41:20.764 "superblock": false, 00:41:20.764 "num_base_bdevs": 4, 00:41:20.764 "num_base_bdevs_discovered": 3, 00:41:20.764 "num_base_bdevs_operational": 4, 00:41:20.764 "base_bdevs_list": [ 00:41:20.764 { 00:41:20.764 "name": "BaseBdev1", 00:41:20.764 "uuid": "e1fa3023-dd52-418d-91ce-93f1e4a35bdd", 00:41:20.764 "is_configured": true, 00:41:20.764 "data_offset": 0, 00:41:20.764 "data_size": 65536 00:41:20.764 }, 00:41:20.764 { 00:41:20.764 "name": "BaseBdev2", 00:41:20.764 "uuid": "f8d56181-4df8-49bf-b229-498f6c412cf0", 00:41:20.764 "is_configured": true, 00:41:20.764 "data_offset": 0, 00:41:20.764 "data_size": 65536 00:41:20.764 }, 00:41:20.764 { 
00:41:20.764 "name": "BaseBdev3", 00:41:20.765 "uuid": "da5af21d-a18c-43a1-b35c-5192861362e4", 00:41:20.765 "is_configured": true, 00:41:20.765 "data_offset": 0, 00:41:20.765 "data_size": 65536 00:41:20.765 }, 00:41:20.765 { 00:41:20.765 "name": "BaseBdev4", 00:41:20.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:20.765 "is_configured": false, 00:41:20.765 "data_offset": 0, 00:41:20.765 "data_size": 0 00:41:20.765 } 00:41:20.765 ] 00:41:20.765 }' 00:41:20.765 17:38:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:20.765 17:38:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:21.025 17:38:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:41:21.025 17:38:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:21.025 17:38:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:21.025 [2024-11-26 17:38:21.625173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:41:21.025 [2024-11-26 17:38:21.625269] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:41:21.025 [2024-11-26 17:38:21.625280] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:41:21.025 [2024-11-26 17:38:21.625615] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:41:21.025 [2024-11-26 17:38:21.633097] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:41:21.025 [2024-11-26 17:38:21.633125] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:41:21.025 [2024-11-26 17:38:21.633451] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:21.025 BaseBdev4 00:41:21.025 17:38:21 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:21.025 17:38:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:41:21.025 17:38:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:41:21.025 17:38:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:41:21.025 17:38:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:41:21.025 17:38:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:41:21.025 17:38:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:41:21.025 17:38:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:41:21.025 17:38:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:21.025 17:38:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:21.025 17:38:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:21.025 17:38:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:41:21.025 17:38:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:21.025 17:38:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:21.025 [ 00:41:21.025 { 00:41:21.025 "name": "BaseBdev4", 00:41:21.025 "aliases": [ 00:41:21.025 "d82e0c65-b295-4582-a2f8-43f3d1d0d20b" 00:41:21.025 ], 00:41:21.025 "product_name": "Malloc disk", 00:41:21.025 "block_size": 512, 00:41:21.025 "num_blocks": 65536, 00:41:21.025 "uuid": "d82e0c65-b295-4582-a2f8-43f3d1d0d20b", 00:41:21.025 "assigned_rate_limits": { 00:41:21.025 "rw_ios_per_sec": 0, 00:41:21.025 
"rw_mbytes_per_sec": 0, 00:41:21.025 "r_mbytes_per_sec": 0, 00:41:21.025 "w_mbytes_per_sec": 0 00:41:21.025 }, 00:41:21.025 "claimed": true, 00:41:21.025 "claim_type": "exclusive_write", 00:41:21.025 "zoned": false, 00:41:21.025 "supported_io_types": { 00:41:21.025 "read": true, 00:41:21.025 "write": true, 00:41:21.025 "unmap": true, 00:41:21.025 "flush": true, 00:41:21.025 "reset": true, 00:41:21.025 "nvme_admin": false, 00:41:21.025 "nvme_io": false, 00:41:21.025 "nvme_io_md": false, 00:41:21.025 "write_zeroes": true, 00:41:21.025 "zcopy": true, 00:41:21.025 "get_zone_info": false, 00:41:21.025 "zone_management": false, 00:41:21.025 "zone_append": false, 00:41:21.025 "compare": false, 00:41:21.025 "compare_and_write": false, 00:41:21.025 "abort": true, 00:41:21.025 "seek_hole": false, 00:41:21.025 "seek_data": false, 00:41:21.025 "copy": true, 00:41:21.025 "nvme_iov_md": false 00:41:21.025 }, 00:41:21.025 "memory_domains": [ 00:41:21.025 { 00:41:21.025 "dma_device_id": "system", 00:41:21.025 "dma_device_type": 1 00:41:21.025 }, 00:41:21.025 { 00:41:21.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:21.025 "dma_device_type": 2 00:41:21.025 } 00:41:21.025 ], 00:41:21.025 "driver_specific": {} 00:41:21.025 } 00:41:21.025 ] 00:41:21.025 17:38:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:21.025 17:38:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:41:21.025 17:38:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:41:21.025 17:38:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:41:21.025 17:38:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:41:21.025 17:38:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:41:21.025 17:38:21 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:21.025 17:38:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:21.025 17:38:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:21.025 17:38:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:41:21.025 17:38:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:21.025 17:38:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:21.025 17:38:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:21.025 17:38:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:21.025 17:38:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:21.025 17:38:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:21.025 17:38:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:21.025 17:38:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:21.025 17:38:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:21.285 17:38:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:21.285 "name": "Existed_Raid", 00:41:21.285 "uuid": "1266f9dd-6e16-455c-abeb-314f310f86c9", 00:41:21.285 "strip_size_kb": 64, 00:41:21.285 "state": "online", 00:41:21.285 "raid_level": "raid5f", 00:41:21.285 "superblock": false, 00:41:21.285 "num_base_bdevs": 4, 00:41:21.285 "num_base_bdevs_discovered": 4, 00:41:21.285 "num_base_bdevs_operational": 4, 00:41:21.285 "base_bdevs_list": [ 00:41:21.285 { 00:41:21.285 "name": 
"BaseBdev1", 00:41:21.285 "uuid": "e1fa3023-dd52-418d-91ce-93f1e4a35bdd", 00:41:21.285 "is_configured": true, 00:41:21.285 "data_offset": 0, 00:41:21.285 "data_size": 65536 00:41:21.285 }, 00:41:21.285 { 00:41:21.285 "name": "BaseBdev2", 00:41:21.285 "uuid": "f8d56181-4df8-49bf-b229-498f6c412cf0", 00:41:21.285 "is_configured": true, 00:41:21.285 "data_offset": 0, 00:41:21.285 "data_size": 65536 00:41:21.285 }, 00:41:21.285 { 00:41:21.285 "name": "BaseBdev3", 00:41:21.285 "uuid": "da5af21d-a18c-43a1-b35c-5192861362e4", 00:41:21.285 "is_configured": true, 00:41:21.285 "data_offset": 0, 00:41:21.285 "data_size": 65536 00:41:21.285 }, 00:41:21.285 { 00:41:21.285 "name": "BaseBdev4", 00:41:21.285 "uuid": "d82e0c65-b295-4582-a2f8-43f3d1d0d20b", 00:41:21.285 "is_configured": true, 00:41:21.285 "data_offset": 0, 00:41:21.285 "data_size": 65536 00:41:21.285 } 00:41:21.285 ] 00:41:21.285 }' 00:41:21.285 17:38:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:21.285 17:38:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:21.545 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:41:21.545 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:41:21.545 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:41:21.545 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:41:21.545 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:41:21.545 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:41:21.545 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:41:21.545 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:41:21.545 17:38:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:21.545 17:38:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:21.545 [2024-11-26 17:38:22.114612] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:41:21.545 17:38:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:21.545 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:41:21.545 "name": "Existed_Raid", 00:41:21.545 "aliases": [ 00:41:21.545 "1266f9dd-6e16-455c-abeb-314f310f86c9" 00:41:21.545 ], 00:41:21.545 "product_name": "Raid Volume", 00:41:21.545 "block_size": 512, 00:41:21.545 "num_blocks": 196608, 00:41:21.545 "uuid": "1266f9dd-6e16-455c-abeb-314f310f86c9", 00:41:21.545 "assigned_rate_limits": { 00:41:21.545 "rw_ios_per_sec": 0, 00:41:21.545 "rw_mbytes_per_sec": 0, 00:41:21.545 "r_mbytes_per_sec": 0, 00:41:21.545 "w_mbytes_per_sec": 0 00:41:21.545 }, 00:41:21.545 "claimed": false, 00:41:21.545 "zoned": false, 00:41:21.545 "supported_io_types": { 00:41:21.545 "read": true, 00:41:21.545 "write": true, 00:41:21.545 "unmap": false, 00:41:21.545 "flush": false, 00:41:21.545 "reset": true, 00:41:21.545 "nvme_admin": false, 00:41:21.545 "nvme_io": false, 00:41:21.545 "nvme_io_md": false, 00:41:21.545 "write_zeroes": true, 00:41:21.545 "zcopy": false, 00:41:21.545 "get_zone_info": false, 00:41:21.545 "zone_management": false, 00:41:21.545 "zone_append": false, 00:41:21.545 "compare": false, 00:41:21.545 "compare_and_write": false, 00:41:21.545 "abort": false, 00:41:21.545 "seek_hole": false, 00:41:21.545 "seek_data": false, 00:41:21.545 "copy": false, 00:41:21.545 "nvme_iov_md": false 00:41:21.545 }, 00:41:21.545 "driver_specific": { 00:41:21.545 "raid": { 00:41:21.545 "uuid": "1266f9dd-6e16-455c-abeb-314f310f86c9", 00:41:21.545 "strip_size_kb": 64, 
00:41:21.545 "state": "online", 00:41:21.545 "raid_level": "raid5f", 00:41:21.545 "superblock": false, 00:41:21.545 "num_base_bdevs": 4, 00:41:21.545 "num_base_bdevs_discovered": 4, 00:41:21.545 "num_base_bdevs_operational": 4, 00:41:21.545 "base_bdevs_list": [ 00:41:21.545 { 00:41:21.545 "name": "BaseBdev1", 00:41:21.545 "uuid": "e1fa3023-dd52-418d-91ce-93f1e4a35bdd", 00:41:21.545 "is_configured": true, 00:41:21.545 "data_offset": 0, 00:41:21.545 "data_size": 65536 00:41:21.545 }, 00:41:21.545 { 00:41:21.545 "name": "BaseBdev2", 00:41:21.545 "uuid": "f8d56181-4df8-49bf-b229-498f6c412cf0", 00:41:21.545 "is_configured": true, 00:41:21.545 "data_offset": 0, 00:41:21.545 "data_size": 65536 00:41:21.545 }, 00:41:21.545 { 00:41:21.545 "name": "BaseBdev3", 00:41:21.545 "uuid": "da5af21d-a18c-43a1-b35c-5192861362e4", 00:41:21.545 "is_configured": true, 00:41:21.545 "data_offset": 0, 00:41:21.545 "data_size": 65536 00:41:21.545 }, 00:41:21.545 { 00:41:21.545 "name": "BaseBdev4", 00:41:21.545 "uuid": "d82e0c65-b295-4582-a2f8-43f3d1d0d20b", 00:41:21.545 "is_configured": true, 00:41:21.545 "data_offset": 0, 00:41:21.545 "data_size": 65536 00:41:21.545 } 00:41:21.545 ] 00:41:21.545 } 00:41:21.545 } 00:41:21.545 }' 00:41:21.545 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:41:21.545 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:41:21.545 BaseBdev2 00:41:21.545 BaseBdev3 00:41:21.545 BaseBdev4' 00:41:21.545 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:41:21.545 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:41:21.545 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:41:21.545 17:38:22 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:41:21.545 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:41:21.545 17:38:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:21.545 17:38:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:21.805 17:38:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:21.805 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:41:21.805 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:41:21.805 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:41:21.805 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:41:21.805 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:41:21.805 17:38:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:21.805 17:38:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:21.805 17:38:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:21.805 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:41:21.805 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:41:21.805 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:41:21.805 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:41:21.805 17:38:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:21.805 17:38:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:21.805 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:41:21.805 17:38:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:21.805 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:41:21.805 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:41:21.805 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:41:21.805 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:41:21.805 17:38:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:21.805 17:38:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:21.805 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:41:21.805 17:38:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:21.805 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:41:21.805 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:41:21.805 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:41:21.805 17:38:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:21.805 17:38:22 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:41:21.805 [2024-11-26 17:38:22.401916] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:41:22.065 17:38:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:22.065 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:41:22.065 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:41:22.065 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:41:22.065 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:41:22.065 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:41:22.065 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:41:22.065 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:41:22.065 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:22.065 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:22.065 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:22.065 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:41:22.065 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:22.065 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:22.065 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:22.065 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:22.065 17:38:22 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:22.065 17:38:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:22.065 17:38:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:22.065 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:22.065 17:38:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:22.065 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:22.065 "name": "Existed_Raid", 00:41:22.065 "uuid": "1266f9dd-6e16-455c-abeb-314f310f86c9", 00:41:22.065 "strip_size_kb": 64, 00:41:22.065 "state": "online", 00:41:22.065 "raid_level": "raid5f", 00:41:22.065 "superblock": false, 00:41:22.065 "num_base_bdevs": 4, 00:41:22.065 "num_base_bdevs_discovered": 3, 00:41:22.065 "num_base_bdevs_operational": 3, 00:41:22.065 "base_bdevs_list": [ 00:41:22.065 { 00:41:22.065 "name": null, 00:41:22.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:22.065 "is_configured": false, 00:41:22.065 "data_offset": 0, 00:41:22.065 "data_size": 65536 00:41:22.065 }, 00:41:22.065 { 00:41:22.065 "name": "BaseBdev2", 00:41:22.065 "uuid": "f8d56181-4df8-49bf-b229-498f6c412cf0", 00:41:22.065 "is_configured": true, 00:41:22.065 "data_offset": 0, 00:41:22.065 "data_size": 65536 00:41:22.065 }, 00:41:22.065 { 00:41:22.065 "name": "BaseBdev3", 00:41:22.065 "uuid": "da5af21d-a18c-43a1-b35c-5192861362e4", 00:41:22.065 "is_configured": true, 00:41:22.065 "data_offset": 0, 00:41:22.065 "data_size": 65536 00:41:22.066 }, 00:41:22.066 { 00:41:22.066 "name": "BaseBdev4", 00:41:22.066 "uuid": "d82e0c65-b295-4582-a2f8-43f3d1d0d20b", 00:41:22.066 "is_configured": true, 00:41:22.066 "data_offset": 0, 00:41:22.066 "data_size": 65536 00:41:22.066 } 00:41:22.066 ] 00:41:22.066 }' 00:41:22.066 
17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:22.066 17:38:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:22.325 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:41:22.325 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:41:22.325 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:41:22.325 17:38:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:22.325 17:38:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:22.325 17:38:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:22.325 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:22.584 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:41:22.584 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:41:22.584 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:41:22.584 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:22.584 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:22.584 [2024-11-26 17:38:23.036716] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:41:22.584 [2024-11-26 17:38:23.036840] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:41:22.584 [2024-11-26 17:38:23.142487] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:41:22.584 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:41:22.584 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:41:22.584 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:41:22.584 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:22.584 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:22.584 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:22.584 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:41:22.584 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:22.584 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:41:22.584 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:41:22.584 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:41:22.584 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:22.584 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:22.584 [2024-11-26 17:38:23.202370] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:41:22.843 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:22.843 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:41:22.843 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:41:22.843 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:41:22.843 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:41:22.843 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:22.843 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:22.843 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:22.843 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:41:22.843 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:41:22.843 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:41:22.843 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:22.843 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:22.843 [2024-11-26 17:38:23.347336] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:41:22.843 [2024-11-26 17:38:23.347398] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:41:22.843 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:22.843 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:41:22.843 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:41:22.843 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:41:22.843 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:22.843 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:22.843 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:22.843 17:38:23 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:22.843 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:41:22.843 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:41:22.843 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:41:22.843 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:41:22.843 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:41:22.843 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:41:22.843 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:22.843 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:23.107 BaseBdev2 00:41:23.107 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:23.107 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:41:23.107 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:41:23.107 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:41:23.107 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:41:23.107 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:41:23.107 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:41:23.107 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:41:23.107 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:41:23.107 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:23.107 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:23.107 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:41:23.107 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:23.107 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:23.107 [ 00:41:23.107 { 00:41:23.107 "name": "BaseBdev2", 00:41:23.107 "aliases": [ 00:41:23.107 "1f5cc367-bf48-47c9-acbe-61108889e88e" 00:41:23.107 ], 00:41:23.107 "product_name": "Malloc disk", 00:41:23.107 "block_size": 512, 00:41:23.107 "num_blocks": 65536, 00:41:23.107 "uuid": "1f5cc367-bf48-47c9-acbe-61108889e88e", 00:41:23.107 "assigned_rate_limits": { 00:41:23.107 "rw_ios_per_sec": 0, 00:41:23.107 "rw_mbytes_per_sec": 0, 00:41:23.107 "r_mbytes_per_sec": 0, 00:41:23.107 "w_mbytes_per_sec": 0 00:41:23.107 }, 00:41:23.107 "claimed": false, 00:41:23.107 "zoned": false, 00:41:23.107 "supported_io_types": { 00:41:23.107 "read": true, 00:41:23.107 "write": true, 00:41:23.107 "unmap": true, 00:41:23.107 "flush": true, 00:41:23.107 "reset": true, 00:41:23.107 "nvme_admin": false, 00:41:23.107 "nvme_io": false, 00:41:23.107 "nvme_io_md": false, 00:41:23.107 "write_zeroes": true, 00:41:23.107 "zcopy": true, 00:41:23.107 "get_zone_info": false, 00:41:23.107 "zone_management": false, 00:41:23.107 "zone_append": false, 00:41:23.107 "compare": false, 00:41:23.107 "compare_and_write": false, 00:41:23.107 "abort": true, 00:41:23.107 "seek_hole": false, 00:41:23.107 "seek_data": false, 00:41:23.107 "copy": true, 00:41:23.107 "nvme_iov_md": false 00:41:23.107 }, 00:41:23.107 "memory_domains": [ 00:41:23.107 { 00:41:23.107 "dma_device_id": "system", 00:41:23.107 "dma_device_type": 1 00:41:23.107 }, 
00:41:23.107 { 00:41:23.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:23.107 "dma_device_type": 2 00:41:23.107 } 00:41:23.107 ], 00:41:23.107 "driver_specific": {} 00:41:23.107 } 00:41:23.107 ] 00:41:23.107 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:23.107 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:41:23.107 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:41:23.107 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:41:23.107 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:41:23.107 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:23.107 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:23.107 BaseBdev3 00:41:23.107 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:23.107 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:41:23.107 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:41:23.107 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:41:23.107 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:41:23.107 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:41:23.107 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:41:23.107 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:41:23.107 17:38:23 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:41:23.107 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:23.107 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:23.107 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:41:23.107 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:23.107 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:23.107 [ 00:41:23.107 { 00:41:23.107 "name": "BaseBdev3", 00:41:23.107 "aliases": [ 00:41:23.107 "44ff6584-73b6-470e-831e-46e50f41e09d" 00:41:23.107 ], 00:41:23.107 "product_name": "Malloc disk", 00:41:23.107 "block_size": 512, 00:41:23.107 "num_blocks": 65536, 00:41:23.107 "uuid": "44ff6584-73b6-470e-831e-46e50f41e09d", 00:41:23.107 "assigned_rate_limits": { 00:41:23.107 "rw_ios_per_sec": 0, 00:41:23.107 "rw_mbytes_per_sec": 0, 00:41:23.107 "r_mbytes_per_sec": 0, 00:41:23.107 "w_mbytes_per_sec": 0 00:41:23.107 }, 00:41:23.107 "claimed": false, 00:41:23.107 "zoned": false, 00:41:23.107 "supported_io_types": { 00:41:23.107 "read": true, 00:41:23.107 "write": true, 00:41:23.107 "unmap": true, 00:41:23.107 "flush": true, 00:41:23.107 "reset": true, 00:41:23.107 "nvme_admin": false, 00:41:23.107 "nvme_io": false, 00:41:23.107 "nvme_io_md": false, 00:41:23.107 "write_zeroes": true, 00:41:23.107 "zcopy": true, 00:41:23.107 "get_zone_info": false, 00:41:23.107 "zone_management": false, 00:41:23.107 "zone_append": false, 00:41:23.107 "compare": false, 00:41:23.107 "compare_and_write": false, 00:41:23.107 "abort": true, 00:41:23.107 "seek_hole": false, 00:41:23.107 "seek_data": false, 00:41:23.107 "copy": true, 00:41:23.107 "nvme_iov_md": false 00:41:23.107 }, 00:41:23.107 "memory_domains": [ 00:41:23.107 { 00:41:23.107 "dma_device_id": "system", 00:41:23.107 
"dma_device_type": 1 00:41:23.107 }, 00:41:23.107 { 00:41:23.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:23.107 "dma_device_type": 2 00:41:23.107 } 00:41:23.107 ], 00:41:23.107 "driver_specific": {} 00:41:23.107 } 00:41:23.107 ] 00:41:23.107 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:23.108 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:41:23.108 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:41:23.108 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:41:23.108 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:41:23.108 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:23.108 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:23.108 BaseBdev4 00:41:23.108 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:23.108 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:41:23.108 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:41:23.108 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:41:23.108 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:41:23.108 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:41:23.108 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:41:23.108 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:41:23.108 17:38:23 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:23.108 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:23.108 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:23.108 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:41:23.108 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:23.108 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:23.108 [ 00:41:23.108 { 00:41:23.108 "name": "BaseBdev4", 00:41:23.108 "aliases": [ 00:41:23.108 "f5890fd0-55d7-4091-81f2-f7275768382a" 00:41:23.108 ], 00:41:23.108 "product_name": "Malloc disk", 00:41:23.108 "block_size": 512, 00:41:23.108 "num_blocks": 65536, 00:41:23.108 "uuid": "f5890fd0-55d7-4091-81f2-f7275768382a", 00:41:23.108 "assigned_rate_limits": { 00:41:23.108 "rw_ios_per_sec": 0, 00:41:23.108 "rw_mbytes_per_sec": 0, 00:41:23.108 "r_mbytes_per_sec": 0, 00:41:23.108 "w_mbytes_per_sec": 0 00:41:23.108 }, 00:41:23.108 "claimed": false, 00:41:23.108 "zoned": false, 00:41:23.108 "supported_io_types": { 00:41:23.108 "read": true, 00:41:23.108 "write": true, 00:41:23.108 "unmap": true, 00:41:23.108 "flush": true, 00:41:23.108 "reset": true, 00:41:23.108 "nvme_admin": false, 00:41:23.108 "nvme_io": false, 00:41:23.108 "nvme_io_md": false, 00:41:23.108 "write_zeroes": true, 00:41:23.108 "zcopy": true, 00:41:23.108 "get_zone_info": false, 00:41:23.108 "zone_management": false, 00:41:23.108 "zone_append": false, 00:41:23.108 "compare": false, 00:41:23.108 "compare_and_write": false, 00:41:23.108 "abort": true, 00:41:23.108 "seek_hole": false, 00:41:23.108 "seek_data": false, 00:41:23.108 "copy": true, 00:41:23.108 "nvme_iov_md": false 00:41:23.108 }, 00:41:23.108 "memory_domains": [ 00:41:23.108 { 00:41:23.108 
"dma_device_id": "system", 00:41:23.108 "dma_device_type": 1 00:41:23.108 }, 00:41:23.108 { 00:41:23.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:23.108 "dma_device_type": 2 00:41:23.108 } 00:41:23.108 ], 00:41:23.108 "driver_specific": {} 00:41:23.108 } 00:41:23.108 ] 00:41:23.108 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:23.108 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:41:23.108 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:41:23.108 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:41:23.108 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:41:23.108 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:23.108 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:23.108 [2024-11-26 17:38:23.753373] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:41:23.108 [2024-11-26 17:38:23.753426] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:41:23.108 [2024-11-26 17:38:23.753451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:41:23.108 [2024-11-26 17:38:23.755694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:41:23.108 [2024-11-26 17:38:23.755749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:41:23.108 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:23.108 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:41:23.108 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:41:23.108 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:41:23.108 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:23.108 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:23.108 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:41:23.108 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:23.108 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:23.108 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:23.108 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:23.108 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:23.108 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:23.108 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:23.108 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:23.108 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:23.375 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:23.375 "name": "Existed_Raid", 00:41:23.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:23.375 "strip_size_kb": 64, 00:41:23.375 "state": "configuring", 00:41:23.375 "raid_level": "raid5f", 00:41:23.375 "superblock": false, 00:41:23.375 
"num_base_bdevs": 4, 00:41:23.375 "num_base_bdevs_discovered": 3, 00:41:23.375 "num_base_bdevs_operational": 4, 00:41:23.375 "base_bdevs_list": [ 00:41:23.375 { 00:41:23.375 "name": "BaseBdev1", 00:41:23.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:23.375 "is_configured": false, 00:41:23.375 "data_offset": 0, 00:41:23.375 "data_size": 0 00:41:23.375 }, 00:41:23.375 { 00:41:23.375 "name": "BaseBdev2", 00:41:23.375 "uuid": "1f5cc367-bf48-47c9-acbe-61108889e88e", 00:41:23.375 "is_configured": true, 00:41:23.375 "data_offset": 0, 00:41:23.375 "data_size": 65536 00:41:23.375 }, 00:41:23.375 { 00:41:23.375 "name": "BaseBdev3", 00:41:23.375 "uuid": "44ff6584-73b6-470e-831e-46e50f41e09d", 00:41:23.375 "is_configured": true, 00:41:23.375 "data_offset": 0, 00:41:23.375 "data_size": 65536 00:41:23.375 }, 00:41:23.375 { 00:41:23.375 "name": "BaseBdev4", 00:41:23.375 "uuid": "f5890fd0-55d7-4091-81f2-f7275768382a", 00:41:23.375 "is_configured": true, 00:41:23.375 "data_offset": 0, 00:41:23.375 "data_size": 65536 00:41:23.375 } 00:41:23.375 ] 00:41:23.375 }' 00:41:23.375 17:38:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:23.375 17:38:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:23.635 17:38:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:41:23.635 17:38:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:23.635 17:38:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:23.635 [2024-11-26 17:38:24.172865] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:41:23.635 17:38:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:23.635 17:38:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:41:23.635 17:38:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:41:23.635 17:38:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:41:23.635 17:38:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:23.635 17:38:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:23.635 17:38:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:41:23.635 17:38:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:23.635 17:38:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:23.635 17:38:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:23.635 17:38:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:23.635 17:38:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:23.635 17:38:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:23.635 17:38:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:23.635 17:38:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:23.635 17:38:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:23.635 17:38:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:23.635 "name": "Existed_Raid", 00:41:23.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:23.635 "strip_size_kb": 64, 00:41:23.635 "state": "configuring", 00:41:23.635 "raid_level": "raid5f", 00:41:23.635 "superblock": false, 00:41:23.635 "num_base_bdevs": 4, 
00:41:23.635 "num_base_bdevs_discovered": 2, 00:41:23.635 "num_base_bdevs_operational": 4, 00:41:23.635 "base_bdevs_list": [ 00:41:23.635 { 00:41:23.635 "name": "BaseBdev1", 00:41:23.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:23.635 "is_configured": false, 00:41:23.635 "data_offset": 0, 00:41:23.635 "data_size": 0 00:41:23.635 }, 00:41:23.635 { 00:41:23.635 "name": null, 00:41:23.635 "uuid": "1f5cc367-bf48-47c9-acbe-61108889e88e", 00:41:23.635 "is_configured": false, 00:41:23.635 "data_offset": 0, 00:41:23.635 "data_size": 65536 00:41:23.635 }, 00:41:23.635 { 00:41:23.635 "name": "BaseBdev3", 00:41:23.635 "uuid": "44ff6584-73b6-470e-831e-46e50f41e09d", 00:41:23.635 "is_configured": true, 00:41:23.635 "data_offset": 0, 00:41:23.635 "data_size": 65536 00:41:23.635 }, 00:41:23.635 { 00:41:23.635 "name": "BaseBdev4", 00:41:23.635 "uuid": "f5890fd0-55d7-4091-81f2-f7275768382a", 00:41:23.635 "is_configured": true, 00:41:23.635 "data_offset": 0, 00:41:23.635 "data_size": 65536 00:41:23.635 } 00:41:23.635 ] 00:41:23.635 }' 00:41:23.635 17:38:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:23.635 17:38:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:24.208 17:38:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:24.208 17:38:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:41:24.208 17:38:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:24.208 17:38:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:24.208 17:38:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:24.208 17:38:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:41:24.208 17:38:24 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:41:24.208 17:38:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:24.208 17:38:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:24.208 [2024-11-26 17:38:24.720170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:41:24.208 BaseBdev1 00:41:24.208 17:38:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:24.208 17:38:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:41:24.208 17:38:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:41:24.208 17:38:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:41:24.208 17:38:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:41:24.208 17:38:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:41:24.208 17:38:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:41:24.208 17:38:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:41:24.208 17:38:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:24.208 17:38:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:24.208 17:38:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:24.208 17:38:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:41:24.208 17:38:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:24.208 17:38:24 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:24.208 [ 00:41:24.208 { 00:41:24.208 "name": "BaseBdev1", 00:41:24.208 "aliases": [ 00:41:24.208 "1911d140-5486-4d28-84b9-d362f8ade4f7" 00:41:24.208 ], 00:41:24.208 "product_name": "Malloc disk", 00:41:24.208 "block_size": 512, 00:41:24.208 "num_blocks": 65536, 00:41:24.208 "uuid": "1911d140-5486-4d28-84b9-d362f8ade4f7", 00:41:24.208 "assigned_rate_limits": { 00:41:24.208 "rw_ios_per_sec": 0, 00:41:24.208 "rw_mbytes_per_sec": 0, 00:41:24.208 "r_mbytes_per_sec": 0, 00:41:24.208 "w_mbytes_per_sec": 0 00:41:24.208 }, 00:41:24.208 "claimed": true, 00:41:24.208 "claim_type": "exclusive_write", 00:41:24.208 "zoned": false, 00:41:24.208 "supported_io_types": { 00:41:24.208 "read": true, 00:41:24.208 "write": true, 00:41:24.208 "unmap": true, 00:41:24.208 "flush": true, 00:41:24.208 "reset": true, 00:41:24.208 "nvme_admin": false, 00:41:24.208 "nvme_io": false, 00:41:24.208 "nvme_io_md": false, 00:41:24.208 "write_zeroes": true, 00:41:24.208 "zcopy": true, 00:41:24.208 "get_zone_info": false, 00:41:24.208 "zone_management": false, 00:41:24.208 "zone_append": false, 00:41:24.208 "compare": false, 00:41:24.208 "compare_and_write": false, 00:41:24.208 "abort": true, 00:41:24.208 "seek_hole": false, 00:41:24.208 "seek_data": false, 00:41:24.208 "copy": true, 00:41:24.208 "nvme_iov_md": false 00:41:24.208 }, 00:41:24.208 "memory_domains": [ 00:41:24.208 { 00:41:24.208 "dma_device_id": "system", 00:41:24.208 "dma_device_type": 1 00:41:24.208 }, 00:41:24.208 { 00:41:24.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:24.208 "dma_device_type": 2 00:41:24.208 } 00:41:24.208 ], 00:41:24.208 "driver_specific": {} 00:41:24.208 } 00:41:24.208 ] 00:41:24.208 17:38:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:24.208 17:38:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:41:24.208 17:38:24 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:41:24.208 17:38:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:41:24.208 17:38:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:41:24.208 17:38:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:24.208 17:38:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:24.208 17:38:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:41:24.208 17:38:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:24.208 17:38:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:24.208 17:38:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:24.208 17:38:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:24.208 17:38:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:24.208 17:38:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:24.208 17:38:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:24.208 17:38:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:24.208 17:38:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:24.208 17:38:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:24.208 "name": "Existed_Raid", 00:41:24.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:24.208 "strip_size_kb": 64, 00:41:24.208 "state": 
"configuring", 00:41:24.208 "raid_level": "raid5f", 00:41:24.208 "superblock": false, 00:41:24.208 "num_base_bdevs": 4, 00:41:24.208 "num_base_bdevs_discovered": 3, 00:41:24.208 "num_base_bdevs_operational": 4, 00:41:24.208 "base_bdevs_list": [ 00:41:24.208 { 00:41:24.208 "name": "BaseBdev1", 00:41:24.208 "uuid": "1911d140-5486-4d28-84b9-d362f8ade4f7", 00:41:24.208 "is_configured": true, 00:41:24.208 "data_offset": 0, 00:41:24.208 "data_size": 65536 00:41:24.208 }, 00:41:24.208 { 00:41:24.208 "name": null, 00:41:24.208 "uuid": "1f5cc367-bf48-47c9-acbe-61108889e88e", 00:41:24.208 "is_configured": false, 00:41:24.208 "data_offset": 0, 00:41:24.208 "data_size": 65536 00:41:24.208 }, 00:41:24.208 { 00:41:24.208 "name": "BaseBdev3", 00:41:24.208 "uuid": "44ff6584-73b6-470e-831e-46e50f41e09d", 00:41:24.208 "is_configured": true, 00:41:24.209 "data_offset": 0, 00:41:24.209 "data_size": 65536 00:41:24.209 }, 00:41:24.209 { 00:41:24.209 "name": "BaseBdev4", 00:41:24.209 "uuid": "f5890fd0-55d7-4091-81f2-f7275768382a", 00:41:24.209 "is_configured": true, 00:41:24.209 "data_offset": 0, 00:41:24.209 "data_size": 65536 00:41:24.209 } 00:41:24.209 ] 00:41:24.209 }' 00:41:24.209 17:38:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:24.209 17:38:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:24.468 17:38:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:24.468 17:38:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:41:24.468 17:38:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:24.468 17:38:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:24.468 17:38:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:24.727 17:38:25 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:41:24.727 17:38:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:41:24.727 17:38:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:24.727 17:38:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:24.727 [2024-11-26 17:38:25.191479] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:41:24.727 17:38:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:24.727 17:38:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:41:24.727 17:38:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:41:24.727 17:38:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:41:24.727 17:38:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:24.727 17:38:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:24.727 17:38:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:41:24.727 17:38:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:24.727 17:38:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:24.727 17:38:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:24.727 17:38:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:24.727 17:38:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:24.727 17:38:25 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:24.728 17:38:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:24.728 17:38:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:24.728 17:38:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:24.728 17:38:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:24.728 "name": "Existed_Raid", 00:41:24.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:24.728 "strip_size_kb": 64, 00:41:24.728 "state": "configuring", 00:41:24.728 "raid_level": "raid5f", 00:41:24.728 "superblock": false, 00:41:24.728 "num_base_bdevs": 4, 00:41:24.728 "num_base_bdevs_discovered": 2, 00:41:24.728 "num_base_bdevs_operational": 4, 00:41:24.728 "base_bdevs_list": [ 00:41:24.728 { 00:41:24.728 "name": "BaseBdev1", 00:41:24.728 "uuid": "1911d140-5486-4d28-84b9-d362f8ade4f7", 00:41:24.728 "is_configured": true, 00:41:24.728 "data_offset": 0, 00:41:24.728 "data_size": 65536 00:41:24.728 }, 00:41:24.728 { 00:41:24.728 "name": null, 00:41:24.728 "uuid": "1f5cc367-bf48-47c9-acbe-61108889e88e", 00:41:24.728 "is_configured": false, 00:41:24.728 "data_offset": 0, 00:41:24.728 "data_size": 65536 00:41:24.728 }, 00:41:24.728 { 00:41:24.728 "name": null, 00:41:24.728 "uuid": "44ff6584-73b6-470e-831e-46e50f41e09d", 00:41:24.728 "is_configured": false, 00:41:24.728 "data_offset": 0, 00:41:24.728 "data_size": 65536 00:41:24.728 }, 00:41:24.728 { 00:41:24.728 "name": "BaseBdev4", 00:41:24.728 "uuid": "f5890fd0-55d7-4091-81f2-f7275768382a", 00:41:24.728 "is_configured": true, 00:41:24.728 "data_offset": 0, 00:41:24.728 "data_size": 65536 00:41:24.728 } 00:41:24.728 ] 00:41:24.728 }' 00:41:24.728 17:38:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:24.728 17:38:25 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:24.987 17:38:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:24.987 17:38:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:24.987 17:38:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:24.987 17:38:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:41:24.987 17:38:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:24.987 17:38:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:41:24.987 17:38:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:41:24.987 17:38:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:24.987 17:38:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:24.987 [2024-11-26 17:38:25.654639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:41:24.987 17:38:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:24.987 17:38:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:41:24.987 17:38:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:41:24.987 17:38:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:41:24.987 17:38:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:24.987 17:38:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:24.987 
17:38:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:41:24.987 17:38:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:24.987 17:38:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:24.987 17:38:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:24.987 17:38:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:24.987 17:38:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:24.987 17:38:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:24.987 17:38:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:24.987 17:38:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:25.247 17:38:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:25.247 17:38:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:25.247 "name": "Existed_Raid", 00:41:25.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:25.247 "strip_size_kb": 64, 00:41:25.247 "state": "configuring", 00:41:25.247 "raid_level": "raid5f", 00:41:25.247 "superblock": false, 00:41:25.247 "num_base_bdevs": 4, 00:41:25.247 "num_base_bdevs_discovered": 3, 00:41:25.247 "num_base_bdevs_operational": 4, 00:41:25.247 "base_bdevs_list": [ 00:41:25.247 { 00:41:25.247 "name": "BaseBdev1", 00:41:25.247 "uuid": "1911d140-5486-4d28-84b9-d362f8ade4f7", 00:41:25.247 "is_configured": true, 00:41:25.247 "data_offset": 0, 00:41:25.247 "data_size": 65536 00:41:25.247 }, 00:41:25.247 { 00:41:25.247 "name": null, 00:41:25.247 "uuid": "1f5cc367-bf48-47c9-acbe-61108889e88e", 00:41:25.247 "is_configured": 
false, 00:41:25.247 "data_offset": 0, 00:41:25.247 "data_size": 65536 00:41:25.247 }, 00:41:25.247 { 00:41:25.247 "name": "BaseBdev3", 00:41:25.247 "uuid": "44ff6584-73b6-470e-831e-46e50f41e09d", 00:41:25.247 "is_configured": true, 00:41:25.247 "data_offset": 0, 00:41:25.247 "data_size": 65536 00:41:25.247 }, 00:41:25.247 { 00:41:25.247 "name": "BaseBdev4", 00:41:25.247 "uuid": "f5890fd0-55d7-4091-81f2-f7275768382a", 00:41:25.247 "is_configured": true, 00:41:25.247 "data_offset": 0, 00:41:25.247 "data_size": 65536 00:41:25.247 } 00:41:25.247 ] 00:41:25.247 }' 00:41:25.247 17:38:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:25.247 17:38:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:25.506 17:38:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:25.506 17:38:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:41:25.506 17:38:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:25.506 17:38:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:25.506 17:38:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:25.506 17:38:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:41:25.506 17:38:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:41:25.506 17:38:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:25.506 17:38:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:25.506 [2024-11-26 17:38:26.137871] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:41:25.766 17:38:26 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:25.766 17:38:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:41:25.766 17:38:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:41:25.766 17:38:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:41:25.766 17:38:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:25.766 17:38:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:25.766 17:38:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:41:25.766 17:38:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:25.766 17:38:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:25.766 17:38:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:25.766 17:38:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:25.766 17:38:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:25.766 17:38:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:25.766 17:38:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:25.766 17:38:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:25.766 17:38:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:25.766 17:38:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:25.766 "name": "Existed_Raid", 00:41:25.766 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:41:25.766 "strip_size_kb": 64, 00:41:25.766 "state": "configuring", 00:41:25.766 "raid_level": "raid5f", 00:41:25.766 "superblock": false, 00:41:25.766 "num_base_bdevs": 4, 00:41:25.766 "num_base_bdevs_discovered": 2, 00:41:25.767 "num_base_bdevs_operational": 4, 00:41:25.767 "base_bdevs_list": [ 00:41:25.767 { 00:41:25.767 "name": null, 00:41:25.767 "uuid": "1911d140-5486-4d28-84b9-d362f8ade4f7", 00:41:25.767 "is_configured": false, 00:41:25.767 "data_offset": 0, 00:41:25.767 "data_size": 65536 00:41:25.767 }, 00:41:25.767 { 00:41:25.767 "name": null, 00:41:25.767 "uuid": "1f5cc367-bf48-47c9-acbe-61108889e88e", 00:41:25.767 "is_configured": false, 00:41:25.767 "data_offset": 0, 00:41:25.767 "data_size": 65536 00:41:25.767 }, 00:41:25.767 { 00:41:25.767 "name": "BaseBdev3", 00:41:25.767 "uuid": "44ff6584-73b6-470e-831e-46e50f41e09d", 00:41:25.767 "is_configured": true, 00:41:25.767 "data_offset": 0, 00:41:25.767 "data_size": 65536 00:41:25.767 }, 00:41:25.767 { 00:41:25.767 "name": "BaseBdev4", 00:41:25.767 "uuid": "f5890fd0-55d7-4091-81f2-f7275768382a", 00:41:25.767 "is_configured": true, 00:41:25.767 "data_offset": 0, 00:41:25.767 "data_size": 65536 00:41:25.767 } 00:41:25.767 ] 00:41:25.767 }' 00:41:25.767 17:38:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:25.767 17:38:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:26.026 17:38:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:26.026 17:38:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:41:26.026 17:38:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:26.026 17:38:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:26.026 17:38:26 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:26.286 17:38:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:41:26.286 17:38:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:41:26.286 17:38:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:26.286 17:38:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:26.286 [2024-11-26 17:38:26.748131] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:41:26.286 17:38:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:26.286 17:38:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:41:26.286 17:38:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:41:26.286 17:38:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:41:26.286 17:38:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:26.286 17:38:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:26.286 17:38:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:41:26.286 17:38:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:26.286 17:38:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:26.286 17:38:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:26.286 17:38:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:26.286 17:38:26 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:26.286 17:38:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:26.286 17:38:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:26.286 17:38:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:26.286 17:38:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:26.286 17:38:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:26.286 "name": "Existed_Raid", 00:41:26.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:26.286 "strip_size_kb": 64, 00:41:26.286 "state": "configuring", 00:41:26.286 "raid_level": "raid5f", 00:41:26.286 "superblock": false, 00:41:26.286 "num_base_bdevs": 4, 00:41:26.286 "num_base_bdevs_discovered": 3, 00:41:26.286 "num_base_bdevs_operational": 4, 00:41:26.286 "base_bdevs_list": [ 00:41:26.286 { 00:41:26.286 "name": null, 00:41:26.286 "uuid": "1911d140-5486-4d28-84b9-d362f8ade4f7", 00:41:26.286 "is_configured": false, 00:41:26.286 "data_offset": 0, 00:41:26.286 "data_size": 65536 00:41:26.286 }, 00:41:26.286 { 00:41:26.286 "name": "BaseBdev2", 00:41:26.286 "uuid": "1f5cc367-bf48-47c9-acbe-61108889e88e", 00:41:26.286 "is_configured": true, 00:41:26.286 "data_offset": 0, 00:41:26.286 "data_size": 65536 00:41:26.286 }, 00:41:26.286 { 00:41:26.286 "name": "BaseBdev3", 00:41:26.286 "uuid": "44ff6584-73b6-470e-831e-46e50f41e09d", 00:41:26.286 "is_configured": true, 00:41:26.286 "data_offset": 0, 00:41:26.286 "data_size": 65536 00:41:26.286 }, 00:41:26.286 { 00:41:26.286 "name": "BaseBdev4", 00:41:26.286 "uuid": "f5890fd0-55d7-4091-81f2-f7275768382a", 00:41:26.286 "is_configured": true, 00:41:26.286 "data_offset": 0, 00:41:26.286 "data_size": 65536 00:41:26.286 } 00:41:26.286 ] 00:41:26.286 }' 00:41:26.286 17:38:26 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:26.286 17:38:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:26.546 17:38:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:26.546 17:38:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:26.546 17:38:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:26.546 17:38:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:41:26.546 17:38:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:26.546 17:38:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:41:26.805 17:38:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:26.805 17:38:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:41:26.805 17:38:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:26.805 17:38:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:26.805 17:38:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:26.805 17:38:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1911d140-5486-4d28-84b9-d362f8ade4f7 00:41:26.805 17:38:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:26.805 17:38:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:26.805 [2024-11-26 17:38:27.316040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:41:26.805 [2024-11-26 
17:38:27.316107] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:41:26.805 [2024-11-26 17:38:27.316116] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:41:26.805 [2024-11-26 17:38:27.316397] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:41:26.805 [2024-11-26 17:38:27.323939] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:41:26.805 [2024-11-26 17:38:27.323968] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:41:26.805 [2024-11-26 17:38:27.324269] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:26.805 NewBaseBdev 00:41:26.805 17:38:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:26.805 17:38:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:41:26.805 17:38:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:41:26.805 17:38:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:41:26.805 17:38:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:41:26.805 17:38:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:41:26.805 17:38:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:41:26.806 17:38:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:41:26.806 17:38:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:26.806 17:38:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:26.806 17:38:27 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:26.806 17:38:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:41:26.806 17:38:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:26.806 17:38:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:26.806 [ 00:41:26.806 { 00:41:26.806 "name": "NewBaseBdev", 00:41:26.806 "aliases": [ 00:41:26.806 "1911d140-5486-4d28-84b9-d362f8ade4f7" 00:41:26.806 ], 00:41:26.806 "product_name": "Malloc disk", 00:41:26.806 "block_size": 512, 00:41:26.806 "num_blocks": 65536, 00:41:26.806 "uuid": "1911d140-5486-4d28-84b9-d362f8ade4f7", 00:41:26.806 "assigned_rate_limits": { 00:41:26.806 "rw_ios_per_sec": 0, 00:41:26.806 "rw_mbytes_per_sec": 0, 00:41:26.806 "r_mbytes_per_sec": 0, 00:41:26.806 "w_mbytes_per_sec": 0 00:41:26.806 }, 00:41:26.806 "claimed": true, 00:41:26.806 "claim_type": "exclusive_write", 00:41:26.806 "zoned": false, 00:41:26.806 "supported_io_types": { 00:41:26.806 "read": true, 00:41:26.806 "write": true, 00:41:26.806 "unmap": true, 00:41:26.806 "flush": true, 00:41:26.806 "reset": true, 00:41:26.806 "nvme_admin": false, 00:41:26.806 "nvme_io": false, 00:41:26.806 "nvme_io_md": false, 00:41:26.806 "write_zeroes": true, 00:41:26.806 "zcopy": true, 00:41:26.806 "get_zone_info": false, 00:41:26.806 "zone_management": false, 00:41:26.806 "zone_append": false, 00:41:26.806 "compare": false, 00:41:26.806 "compare_and_write": false, 00:41:26.806 "abort": true, 00:41:26.806 "seek_hole": false, 00:41:26.806 "seek_data": false, 00:41:26.806 "copy": true, 00:41:26.806 "nvme_iov_md": false 00:41:26.806 }, 00:41:26.806 "memory_domains": [ 00:41:26.806 { 00:41:26.806 "dma_device_id": "system", 00:41:26.806 "dma_device_type": 1 00:41:26.806 }, 00:41:26.806 { 00:41:26.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:26.806 "dma_device_type": 2 00:41:26.806 } 
00:41:26.806 ], 00:41:26.806 "driver_specific": {} 00:41:26.806 } 00:41:26.806 ] 00:41:26.806 17:38:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:26.806 17:38:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:41:26.806 17:38:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:41:26.806 17:38:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:41:26.806 17:38:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:26.806 17:38:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:26.806 17:38:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:26.806 17:38:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:41:26.806 17:38:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:26.806 17:38:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:26.806 17:38:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:26.806 17:38:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:26.806 17:38:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:26.806 17:38:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:26.806 17:38:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:26.806 17:38:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:26.806 17:38:27 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:26.806 17:38:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:26.806 "name": "Existed_Raid", 00:41:26.806 "uuid": "088a6647-b03f-48b2-bf4b-539206f71fe0", 00:41:26.806 "strip_size_kb": 64, 00:41:26.806 "state": "online", 00:41:26.806 "raid_level": "raid5f", 00:41:26.806 "superblock": false, 00:41:26.806 "num_base_bdevs": 4, 00:41:26.806 "num_base_bdevs_discovered": 4, 00:41:26.806 "num_base_bdevs_operational": 4, 00:41:26.806 "base_bdevs_list": [ 00:41:26.806 { 00:41:26.806 "name": "NewBaseBdev", 00:41:26.806 "uuid": "1911d140-5486-4d28-84b9-d362f8ade4f7", 00:41:26.806 "is_configured": true, 00:41:26.806 "data_offset": 0, 00:41:26.806 "data_size": 65536 00:41:26.806 }, 00:41:26.806 { 00:41:26.806 "name": "BaseBdev2", 00:41:26.806 "uuid": "1f5cc367-bf48-47c9-acbe-61108889e88e", 00:41:26.806 "is_configured": true, 00:41:26.806 "data_offset": 0, 00:41:26.806 "data_size": 65536 00:41:26.806 }, 00:41:26.806 { 00:41:26.806 "name": "BaseBdev3", 00:41:26.806 "uuid": "44ff6584-73b6-470e-831e-46e50f41e09d", 00:41:26.806 "is_configured": true, 00:41:26.806 "data_offset": 0, 00:41:26.806 "data_size": 65536 00:41:26.806 }, 00:41:26.806 { 00:41:26.806 "name": "BaseBdev4", 00:41:26.806 "uuid": "f5890fd0-55d7-4091-81f2-f7275768382a", 00:41:26.806 "is_configured": true, 00:41:26.806 "data_offset": 0, 00:41:26.806 "data_size": 65536 00:41:26.806 } 00:41:26.806 ] 00:41:26.806 }' 00:41:26.806 17:38:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:26.806 17:38:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:27.375 17:38:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:41:27.375 17:38:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:41:27.375 17:38:27 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:41:27.375 17:38:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:41:27.375 17:38:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:41:27.375 17:38:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:41:27.375 17:38:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:41:27.375 17:38:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:41:27.375 17:38:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:27.375 17:38:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:27.375 [2024-11-26 17:38:27.829728] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:41:27.375 17:38:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:27.375 17:38:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:41:27.375 "name": "Existed_Raid", 00:41:27.375 "aliases": [ 00:41:27.375 "088a6647-b03f-48b2-bf4b-539206f71fe0" 00:41:27.375 ], 00:41:27.375 "product_name": "Raid Volume", 00:41:27.375 "block_size": 512, 00:41:27.375 "num_blocks": 196608, 00:41:27.375 "uuid": "088a6647-b03f-48b2-bf4b-539206f71fe0", 00:41:27.375 "assigned_rate_limits": { 00:41:27.375 "rw_ios_per_sec": 0, 00:41:27.375 "rw_mbytes_per_sec": 0, 00:41:27.375 "r_mbytes_per_sec": 0, 00:41:27.375 "w_mbytes_per_sec": 0 00:41:27.375 }, 00:41:27.375 "claimed": false, 00:41:27.375 "zoned": false, 00:41:27.375 "supported_io_types": { 00:41:27.375 "read": true, 00:41:27.375 "write": true, 00:41:27.375 "unmap": false, 00:41:27.375 "flush": false, 00:41:27.375 "reset": true, 00:41:27.375 "nvme_admin": false, 00:41:27.375 "nvme_io": false, 00:41:27.375 "nvme_io_md": 
false, 00:41:27.375 "write_zeroes": true, 00:41:27.375 "zcopy": false, 00:41:27.375 "get_zone_info": false, 00:41:27.375 "zone_management": false, 00:41:27.375 "zone_append": false, 00:41:27.375 "compare": false, 00:41:27.375 "compare_and_write": false, 00:41:27.375 "abort": false, 00:41:27.375 "seek_hole": false, 00:41:27.375 "seek_data": false, 00:41:27.375 "copy": false, 00:41:27.375 "nvme_iov_md": false 00:41:27.375 }, 00:41:27.375 "driver_specific": { 00:41:27.375 "raid": { 00:41:27.375 "uuid": "088a6647-b03f-48b2-bf4b-539206f71fe0", 00:41:27.375 "strip_size_kb": 64, 00:41:27.375 "state": "online", 00:41:27.375 "raid_level": "raid5f", 00:41:27.375 "superblock": false, 00:41:27.375 "num_base_bdevs": 4, 00:41:27.375 "num_base_bdevs_discovered": 4, 00:41:27.375 "num_base_bdevs_operational": 4, 00:41:27.375 "base_bdevs_list": [ 00:41:27.375 { 00:41:27.375 "name": "NewBaseBdev", 00:41:27.375 "uuid": "1911d140-5486-4d28-84b9-d362f8ade4f7", 00:41:27.375 "is_configured": true, 00:41:27.375 "data_offset": 0, 00:41:27.375 "data_size": 65536 00:41:27.375 }, 00:41:27.375 { 00:41:27.375 "name": "BaseBdev2", 00:41:27.375 "uuid": "1f5cc367-bf48-47c9-acbe-61108889e88e", 00:41:27.375 "is_configured": true, 00:41:27.375 "data_offset": 0, 00:41:27.375 "data_size": 65536 00:41:27.375 }, 00:41:27.375 { 00:41:27.375 "name": "BaseBdev3", 00:41:27.375 "uuid": "44ff6584-73b6-470e-831e-46e50f41e09d", 00:41:27.375 "is_configured": true, 00:41:27.375 "data_offset": 0, 00:41:27.375 "data_size": 65536 00:41:27.375 }, 00:41:27.375 { 00:41:27.375 "name": "BaseBdev4", 00:41:27.375 "uuid": "f5890fd0-55d7-4091-81f2-f7275768382a", 00:41:27.375 "is_configured": true, 00:41:27.375 "data_offset": 0, 00:41:27.375 "data_size": 65536 00:41:27.375 } 00:41:27.375 ] 00:41:27.375 } 00:41:27.375 } 00:41:27.375 }' 00:41:27.375 17:38:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:41:27.375 17:38:27 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:41:27.375 BaseBdev2 00:41:27.375 BaseBdev3 00:41:27.375 BaseBdev4' 00:41:27.375 17:38:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:41:27.375 17:38:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:41:27.375 17:38:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:41:27.376 17:38:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:41:27.376 17:38:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:27.376 17:38:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:41:27.376 17:38:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:27.376 17:38:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:27.376 17:38:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:41:27.376 17:38:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:41:27.376 17:38:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:41:27.376 17:38:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:41:27.376 17:38:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:41:27.376 17:38:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:27.376 17:38:28 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:41:27.376 17:38:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:27.376 17:38:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:41:27.376 17:38:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:41:27.376 17:38:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:41:27.376 17:38:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:41:27.376 17:38:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:27.376 17:38:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:27.376 17:38:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:41:27.636 17:38:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:27.636 17:38:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:41:27.636 17:38:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:41:27.636 17:38:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:41:27.636 17:38:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:41:27.636 17:38:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:27.636 17:38:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:27.636 17:38:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:41:27.636 17:38:28 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:27.636 17:38:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:41:27.636 17:38:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:41:27.636 17:38:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:41:27.636 17:38:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:27.636 17:38:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:27.636 [2024-11-26 17:38:28.148814] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:41:27.636 [2024-11-26 17:38:28.148852] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:41:27.636 [2024-11-26 17:38:28.148943] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:41:27.636 [2024-11-26 17:38:28.149285] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:41:27.636 [2024-11-26 17:38:28.149297] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:41:27.636 17:38:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:27.636 17:38:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83070 00:41:27.636 17:38:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 83070 ']' 00:41:27.636 17:38:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 83070 00:41:27.636 17:38:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:41:27.636 17:38:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:41:27.636 17:38:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83070 00:41:27.636 17:38:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:27.636 17:38:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:27.636 17:38:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83070' 00:41:27.636 killing process with pid 83070 00:41:27.636 17:38:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 83070 00:41:27.636 [2024-11-26 17:38:28.195845] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:41:27.636 17:38:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 83070 00:41:28.205 [2024-11-26 17:38:28.633817] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:41:29.587 17:38:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:41:29.587 00:41:29.587 real 0m11.686s 00:41:29.587 user 0m18.165s 00:41:29.587 sys 0m2.256s 00:41:29.587 ************************************ 00:41:29.587 END TEST raid5f_state_function_test 00:41:29.587 ************************************ 00:41:29.587 17:38:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:29.587 17:38:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:41:29.587 17:38:29 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:41:29.587 17:38:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:41:29.587 17:38:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:29.587 17:38:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:41:29.587 ************************************ 00:41:29.587 START TEST 
raid5f_state_function_test_sb 00:41:29.587 ************************************ 00:41:29.587 17:38:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:41:29.587 17:38:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:41:29.587 17:38:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:41:29.587 17:38:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:41:29.587 17:38:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:41:29.587 17:38:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:41:29.587 17:38:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:41:29.587 17:38:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:41:29.587 17:38:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:41:29.587 17:38:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:41:29.587 17:38:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:41:29.587 17:38:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:41:29.587 17:38:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:41:29.587 17:38:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:41:29.587 17:38:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:41:29.587 17:38:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:41:29.587 17:38:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:41:29.587 
17:38:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:41:29.587 17:38:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:41:29.587 17:38:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:41:29.587 17:38:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:41:29.587 17:38:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:41:29.587 17:38:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:41:29.587 17:38:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:41:29.587 17:38:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:41:29.587 17:38:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:41:29.587 17:38:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:41:29.587 17:38:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:41:29.587 17:38:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:41:29.587 17:38:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:41:29.587 17:38:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83736 00:41:29.587 17:38:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:41:29.587 17:38:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83736' 00:41:29.587 Process raid pid: 83736 00:41:29.587 17:38:29 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83736 00:41:29.587 17:38:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83736 ']' 00:41:29.587 17:38:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:29.587 17:38:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:29.587 17:38:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:29.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:29.587 17:38:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:29.587 17:38:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:29.587 [2024-11-26 17:38:30.067554] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:41:29.587 [2024-11-26 17:38:30.067673] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:29.587 [2024-11-26 17:38:30.247984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:29.852 [2024-11-26 17:38:30.392362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:30.111 [2024-11-26 17:38:30.639229] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:41:30.111 [2024-11-26 17:38:30.639278] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:41:30.372 17:38:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:30.372 17:38:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:41:30.372 17:38:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:41:30.372 17:38:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.372 17:38:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:30.372 [2024-11-26 17:38:30.913365] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:41:30.372 [2024-11-26 17:38:30.913437] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:41:30.372 [2024-11-26 17:38:30.913448] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:41:30.372 [2024-11-26 17:38:30.913459] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:41:30.372 [2024-11-26 17:38:30.913466] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:41:30.372 [2024-11-26 17:38:30.913477] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:41:30.372 [2024-11-26 17:38:30.913483] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:41:30.372 [2024-11-26 17:38:30.913493] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:41:30.372 17:38:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.372 17:38:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:41:30.372 17:38:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:41:30.372 17:38:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:41:30.372 17:38:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:30.372 17:38:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:30.372 17:38:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:41:30.372 17:38:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:30.372 17:38:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:30.372 17:38:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:30.372 17:38:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:30.372 17:38:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:30.372 17:38:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:41:30.372 17:38:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:30.372 17:38:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:30.372 17:38:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.372 17:38:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:30.372 "name": "Existed_Raid", 00:41:30.372 "uuid": "42b89ec2-e63b-41b4-a25c-1b9adb8bcf5e", 00:41:30.372 "strip_size_kb": 64, 00:41:30.372 "state": "configuring", 00:41:30.372 "raid_level": "raid5f", 00:41:30.372 "superblock": true, 00:41:30.372 "num_base_bdevs": 4, 00:41:30.372 "num_base_bdevs_discovered": 0, 00:41:30.372 "num_base_bdevs_operational": 4, 00:41:30.372 "base_bdevs_list": [ 00:41:30.372 { 00:41:30.372 "name": "BaseBdev1", 00:41:30.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:30.372 "is_configured": false, 00:41:30.372 "data_offset": 0, 00:41:30.372 "data_size": 0 00:41:30.372 }, 00:41:30.372 { 00:41:30.372 "name": "BaseBdev2", 00:41:30.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:30.372 "is_configured": false, 00:41:30.372 "data_offset": 0, 00:41:30.372 "data_size": 0 00:41:30.372 }, 00:41:30.372 { 00:41:30.372 "name": "BaseBdev3", 00:41:30.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:30.372 "is_configured": false, 00:41:30.372 "data_offset": 0, 00:41:30.372 "data_size": 0 00:41:30.372 }, 00:41:30.372 { 00:41:30.372 "name": "BaseBdev4", 00:41:30.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:30.372 "is_configured": false, 00:41:30.372 "data_offset": 0, 00:41:30.372 "data_size": 0 00:41:30.372 } 00:41:30.372 ] 00:41:30.372 }' 00:41:30.372 17:38:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:30.372 17:38:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:41:30.942 17:38:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:41:30.942 17:38:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.942 17:38:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:30.942 [2024-11-26 17:38:31.352664] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:41:30.942 [2024-11-26 17:38:31.352795] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:41:30.942 17:38:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.942 17:38:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:41:30.942 17:38:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.942 17:38:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:30.942 [2024-11-26 17:38:31.364650] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:41:30.942 [2024-11-26 17:38:31.364749] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:41:30.942 [2024-11-26 17:38:31.364792] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:41:30.942 [2024-11-26 17:38:31.364816] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:41:30.942 [2024-11-26 17:38:31.364833] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:41:30.942 [2024-11-26 17:38:31.364870] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:41:30.942 [2024-11-26 17:38:31.364916] 
bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:41:30.942 [2024-11-26 17:38:31.364939] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:41:30.942 17:38:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.942 17:38:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:41:30.942 17:38:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.942 17:38:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:30.942 [2024-11-26 17:38:31.419760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:41:30.942 BaseBdev1 00:41:30.942 17:38:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.942 17:38:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:41:30.942 17:38:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:41:30.942 17:38:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:41:30.942 17:38:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:41:30.942 17:38:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:41:30.942 17:38:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:41:30.942 17:38:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:41:30.942 17:38:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.942 17:38:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:41:30.942 17:38:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.942 17:38:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:41:30.942 17:38:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.942 17:38:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:30.942 [ 00:41:30.942 { 00:41:30.942 "name": "BaseBdev1", 00:41:30.942 "aliases": [ 00:41:30.942 "e85097d7-f6a9-428c-913d-d22a72a36c8a" 00:41:30.942 ], 00:41:30.942 "product_name": "Malloc disk", 00:41:30.943 "block_size": 512, 00:41:30.943 "num_blocks": 65536, 00:41:30.943 "uuid": "e85097d7-f6a9-428c-913d-d22a72a36c8a", 00:41:30.943 "assigned_rate_limits": { 00:41:30.943 "rw_ios_per_sec": 0, 00:41:30.943 "rw_mbytes_per_sec": 0, 00:41:30.943 "r_mbytes_per_sec": 0, 00:41:30.943 "w_mbytes_per_sec": 0 00:41:30.943 }, 00:41:30.943 "claimed": true, 00:41:30.943 "claim_type": "exclusive_write", 00:41:30.943 "zoned": false, 00:41:30.943 "supported_io_types": { 00:41:30.943 "read": true, 00:41:30.943 "write": true, 00:41:30.943 "unmap": true, 00:41:30.943 "flush": true, 00:41:30.943 "reset": true, 00:41:30.943 "nvme_admin": false, 00:41:30.943 "nvme_io": false, 00:41:30.943 "nvme_io_md": false, 00:41:30.943 "write_zeroes": true, 00:41:30.943 "zcopy": true, 00:41:30.943 "get_zone_info": false, 00:41:30.943 "zone_management": false, 00:41:30.943 "zone_append": false, 00:41:30.943 "compare": false, 00:41:30.943 "compare_and_write": false, 00:41:30.943 "abort": true, 00:41:30.943 "seek_hole": false, 00:41:30.943 "seek_data": false, 00:41:30.943 "copy": true, 00:41:30.943 "nvme_iov_md": false 00:41:30.943 }, 00:41:30.943 "memory_domains": [ 00:41:30.943 { 00:41:30.943 "dma_device_id": "system", 00:41:30.943 "dma_device_type": 1 00:41:30.943 }, 00:41:30.943 { 00:41:30.943 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:41:30.943 "dma_device_type": 2 00:41:30.943 } 00:41:30.943 ], 00:41:30.943 "driver_specific": {} 00:41:30.943 } 00:41:30.943 ] 00:41:30.943 17:38:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.943 17:38:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:41:30.943 17:38:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:41:30.943 17:38:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:41:30.943 17:38:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:41:30.943 17:38:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:30.943 17:38:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:30.943 17:38:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:41:30.943 17:38:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:30.943 17:38:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:30.943 17:38:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:30.943 17:38:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:30.943 17:38:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:30.943 17:38:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.943 17:38:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:30.943 17:38:31 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:30.943 17:38:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.943 17:38:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:30.943 "name": "Existed_Raid", 00:41:30.943 "uuid": "976d01fd-20e7-413a-a302-b5775127925e", 00:41:30.943 "strip_size_kb": 64, 00:41:30.943 "state": "configuring", 00:41:30.943 "raid_level": "raid5f", 00:41:30.943 "superblock": true, 00:41:30.943 "num_base_bdevs": 4, 00:41:30.943 "num_base_bdevs_discovered": 1, 00:41:30.943 "num_base_bdevs_operational": 4, 00:41:30.943 "base_bdevs_list": [ 00:41:30.943 { 00:41:30.943 "name": "BaseBdev1", 00:41:30.943 "uuid": "e85097d7-f6a9-428c-913d-d22a72a36c8a", 00:41:30.943 "is_configured": true, 00:41:30.943 "data_offset": 2048, 00:41:30.943 "data_size": 63488 00:41:30.943 }, 00:41:30.943 { 00:41:30.943 "name": "BaseBdev2", 00:41:30.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:30.943 "is_configured": false, 00:41:30.943 "data_offset": 0, 00:41:30.943 "data_size": 0 00:41:30.943 }, 00:41:30.943 { 00:41:30.943 "name": "BaseBdev3", 00:41:30.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:30.943 "is_configured": false, 00:41:30.943 "data_offset": 0, 00:41:30.943 "data_size": 0 00:41:30.943 }, 00:41:30.943 { 00:41:30.943 "name": "BaseBdev4", 00:41:30.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:30.943 "is_configured": false, 00:41:30.943 "data_offset": 0, 00:41:30.943 "data_size": 0 00:41:30.943 } 00:41:30.943 ] 00:41:30.943 }' 00:41:30.943 17:38:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:30.943 17:38:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:31.512 17:38:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:41:31.512 17:38:31 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.512 17:38:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:31.512 [2024-11-26 17:38:31.930973] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:41:31.512 [2024-11-26 17:38:31.931124] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:41:31.512 17:38:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:31.512 17:38:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:41:31.512 17:38:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.512 17:38:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:31.512 [2024-11-26 17:38:31.943002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:41:31.512 [2024-11-26 17:38:31.945253] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:41:31.512 [2024-11-26 17:38:31.945298] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:41:31.512 [2024-11-26 17:38:31.945308] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:41:31.512 [2024-11-26 17:38:31.945319] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:41:31.512 [2024-11-26 17:38:31.945325] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:41:31.512 [2024-11-26 17:38:31.945334] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:41:31.512 17:38:31 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:31.512 17:38:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:41:31.512 17:38:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:41:31.512 17:38:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:41:31.512 17:38:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:41:31.512 17:38:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:41:31.512 17:38:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:31.512 17:38:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:31.512 17:38:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:41:31.512 17:38:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:31.512 17:38:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:31.512 17:38:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:31.512 17:38:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:31.512 17:38:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:31.512 17:38:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:31.512 17:38:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.512 17:38:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:31.512 17:38:31 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:31.512 17:38:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:31.512 "name": "Existed_Raid", 00:41:31.512 "uuid": "5105a505-469e-4d01-846e-480ba8fd2240", 00:41:31.512 "strip_size_kb": 64, 00:41:31.512 "state": "configuring", 00:41:31.512 "raid_level": "raid5f", 00:41:31.512 "superblock": true, 00:41:31.513 "num_base_bdevs": 4, 00:41:31.513 "num_base_bdevs_discovered": 1, 00:41:31.513 "num_base_bdevs_operational": 4, 00:41:31.513 "base_bdevs_list": [ 00:41:31.513 { 00:41:31.513 "name": "BaseBdev1", 00:41:31.513 "uuid": "e85097d7-f6a9-428c-913d-d22a72a36c8a", 00:41:31.513 "is_configured": true, 00:41:31.513 "data_offset": 2048, 00:41:31.513 "data_size": 63488 00:41:31.513 }, 00:41:31.513 { 00:41:31.513 "name": "BaseBdev2", 00:41:31.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:31.513 "is_configured": false, 00:41:31.513 "data_offset": 0, 00:41:31.513 "data_size": 0 00:41:31.513 }, 00:41:31.513 { 00:41:31.513 "name": "BaseBdev3", 00:41:31.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:31.513 "is_configured": false, 00:41:31.513 "data_offset": 0, 00:41:31.513 "data_size": 0 00:41:31.513 }, 00:41:31.513 { 00:41:31.513 "name": "BaseBdev4", 00:41:31.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:31.513 "is_configured": false, 00:41:31.513 "data_offset": 0, 00:41:31.513 "data_size": 0 00:41:31.513 } 00:41:31.513 ] 00:41:31.513 }' 00:41:31.513 17:38:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:31.513 17:38:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:31.772 17:38:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:41:31.772 17:38:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:41:31.772 17:38:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:32.032 [2024-11-26 17:38:32.495195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:41:32.032 BaseBdev2 00:41:32.032 17:38:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.032 17:38:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:41:32.032 17:38:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:41:32.032 17:38:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:41:32.032 17:38:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:41:32.032 17:38:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:41:32.032 17:38:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:41:32.032 17:38:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:41:32.032 17:38:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.032 17:38:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:32.032 17:38:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.032 17:38:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:41:32.032 17:38:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.032 17:38:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:32.032 [ 00:41:32.032 { 00:41:32.032 "name": "BaseBdev2", 00:41:32.032 "aliases": [ 00:41:32.032 
"637e7307-363e-4697-80e2-391f284c0946" 00:41:32.032 ], 00:41:32.032 "product_name": "Malloc disk", 00:41:32.032 "block_size": 512, 00:41:32.032 "num_blocks": 65536, 00:41:32.032 "uuid": "637e7307-363e-4697-80e2-391f284c0946", 00:41:32.032 "assigned_rate_limits": { 00:41:32.032 "rw_ios_per_sec": 0, 00:41:32.032 "rw_mbytes_per_sec": 0, 00:41:32.032 "r_mbytes_per_sec": 0, 00:41:32.032 "w_mbytes_per_sec": 0 00:41:32.032 }, 00:41:32.032 "claimed": true, 00:41:32.032 "claim_type": "exclusive_write", 00:41:32.032 "zoned": false, 00:41:32.032 "supported_io_types": { 00:41:32.032 "read": true, 00:41:32.032 "write": true, 00:41:32.032 "unmap": true, 00:41:32.032 "flush": true, 00:41:32.032 "reset": true, 00:41:32.032 "nvme_admin": false, 00:41:32.032 "nvme_io": false, 00:41:32.032 "nvme_io_md": false, 00:41:32.032 "write_zeroes": true, 00:41:32.032 "zcopy": true, 00:41:32.032 "get_zone_info": false, 00:41:32.032 "zone_management": false, 00:41:32.032 "zone_append": false, 00:41:32.032 "compare": false, 00:41:32.032 "compare_and_write": false, 00:41:32.032 "abort": true, 00:41:32.032 "seek_hole": false, 00:41:32.032 "seek_data": false, 00:41:32.032 "copy": true, 00:41:32.032 "nvme_iov_md": false 00:41:32.032 }, 00:41:32.032 "memory_domains": [ 00:41:32.032 { 00:41:32.032 "dma_device_id": "system", 00:41:32.032 "dma_device_type": 1 00:41:32.032 }, 00:41:32.032 { 00:41:32.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:32.032 "dma_device_type": 2 00:41:32.032 } 00:41:32.032 ], 00:41:32.032 "driver_specific": {} 00:41:32.032 } 00:41:32.032 ] 00:41:32.032 17:38:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.032 17:38:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:41:32.032 17:38:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:41:32.032 17:38:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
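Each base bdev in this run is created with `bdev_malloc_create 32 512 -b BaseBdevN`, i.e. a 32 MiB malloc disk with a 512-byte block size, and the `bdev_get_bdevs` descriptors above accordingly report `"block_size": 512, "num_blocks": 65536`. A quick sanity check of that arithmetic (pure Python, no SPDK target required):

```python
# bdev_malloc_create takes the size in MiB and the block size in bytes;
# the resulting bdev advertises size_mb * 1024 * 1024 / block_size blocks.
def malloc_num_blocks(size_mb: int, block_size: int) -> int:
    total_bytes = size_mb * 1024 * 1024
    assert total_bytes % block_size == 0, "size must be a whole number of blocks"
    return total_bytes // block_size

# Matches the descriptors in this log: 32 MiB at 512 B/block -> 65536 blocks.
print(malloc_num_blocks(32, 512))  # → 65536
```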
00:41:32.032 17:38:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:41:32.032 17:38:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:41:32.033 17:38:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:41:32.033 17:38:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:32.033 17:38:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:32.033 17:38:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:41:32.033 17:38:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:32.033 17:38:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:32.033 17:38:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:32.033 17:38:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:32.033 17:38:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:32.033 17:38:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:32.033 17:38:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.033 17:38:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:32.033 17:38:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.033 17:38:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:32.033 "name": "Existed_Raid", 00:41:32.033 "uuid": 
"5105a505-469e-4d01-846e-480ba8fd2240", 00:41:32.033 "strip_size_kb": 64, 00:41:32.033 "state": "configuring", 00:41:32.033 "raid_level": "raid5f", 00:41:32.033 "superblock": true, 00:41:32.033 "num_base_bdevs": 4, 00:41:32.033 "num_base_bdevs_discovered": 2, 00:41:32.033 "num_base_bdevs_operational": 4, 00:41:32.033 "base_bdevs_list": [ 00:41:32.033 { 00:41:32.033 "name": "BaseBdev1", 00:41:32.033 "uuid": "e85097d7-f6a9-428c-913d-d22a72a36c8a", 00:41:32.033 "is_configured": true, 00:41:32.033 "data_offset": 2048, 00:41:32.033 "data_size": 63488 00:41:32.033 }, 00:41:32.033 { 00:41:32.033 "name": "BaseBdev2", 00:41:32.033 "uuid": "637e7307-363e-4697-80e2-391f284c0946", 00:41:32.033 "is_configured": true, 00:41:32.033 "data_offset": 2048, 00:41:32.033 "data_size": 63488 00:41:32.033 }, 00:41:32.033 { 00:41:32.033 "name": "BaseBdev3", 00:41:32.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:32.033 "is_configured": false, 00:41:32.033 "data_offset": 0, 00:41:32.033 "data_size": 0 00:41:32.033 }, 00:41:32.033 { 00:41:32.033 "name": "BaseBdev4", 00:41:32.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:32.033 "is_configured": false, 00:41:32.033 "data_offset": 0, 00:41:32.033 "data_size": 0 00:41:32.033 } 00:41:32.033 ] 00:41:32.033 }' 00:41:32.033 17:38:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:32.033 17:38:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:32.292 17:38:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:41:32.292 17:38:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.292 17:38:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:32.551 [2024-11-26 17:38:33.029191] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:41:32.551 BaseBdev3 
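The `waitforbdev` helper interleaved through this log first calls `bdev_wait_for_examine` and then polls `bdev_get_bdevs -b <name> -t 2000` until the named bdev appears. A generic sketch of that poll-with-timeout pattern follows; the `probe` callable here is a stand-in for the RPC lookup, not an SPDK API:

```python
import time

def wait_for_bdev(probe, timeout_s: float = 2.0, interval_s: float = 0.1) -> bool:
    """Poll probe() until it returns truthy or the timeout elapses,
    loosely mirroring waitforbdev's 2000 ms bdev_get_bdevs retry loop."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if probe():
            return True
        time.sleep(interval_s)
    return False

# Stand-in probe: a bdev that "appears" on the third poll.
calls = {"n": 0}
def fake_probe():
    calls["n"] += 1
    return calls["n"] >= 3

print(wait_for_bdev(fake_probe))  # → True
```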
00:41:32.551 17:38:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.551 17:38:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:41:32.551 17:38:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:41:32.551 17:38:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:41:32.551 17:38:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:41:32.551 17:38:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:41:32.551 17:38:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:41:32.551 17:38:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:41:32.551 17:38:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.551 17:38:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:32.551 17:38:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.551 17:38:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:41:32.551 17:38:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.551 17:38:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:32.551 [ 00:41:32.551 { 00:41:32.551 "name": "BaseBdev3", 00:41:32.551 "aliases": [ 00:41:32.551 "b74cea92-abd7-4aa4-a638-efd69bf26612" 00:41:32.551 ], 00:41:32.551 "product_name": "Malloc disk", 00:41:32.551 "block_size": 512, 00:41:32.551 "num_blocks": 65536, 00:41:32.551 "uuid": "b74cea92-abd7-4aa4-a638-efd69bf26612", 00:41:32.551 
"assigned_rate_limits": { 00:41:32.551 "rw_ios_per_sec": 0, 00:41:32.551 "rw_mbytes_per_sec": 0, 00:41:32.551 "r_mbytes_per_sec": 0, 00:41:32.551 "w_mbytes_per_sec": 0 00:41:32.551 }, 00:41:32.551 "claimed": true, 00:41:32.551 "claim_type": "exclusive_write", 00:41:32.552 "zoned": false, 00:41:32.552 "supported_io_types": { 00:41:32.552 "read": true, 00:41:32.552 "write": true, 00:41:32.552 "unmap": true, 00:41:32.552 "flush": true, 00:41:32.552 "reset": true, 00:41:32.552 "nvme_admin": false, 00:41:32.552 "nvme_io": false, 00:41:32.552 "nvme_io_md": false, 00:41:32.552 "write_zeroes": true, 00:41:32.552 "zcopy": true, 00:41:32.552 "get_zone_info": false, 00:41:32.552 "zone_management": false, 00:41:32.552 "zone_append": false, 00:41:32.552 "compare": false, 00:41:32.552 "compare_and_write": false, 00:41:32.552 "abort": true, 00:41:32.552 "seek_hole": false, 00:41:32.552 "seek_data": false, 00:41:32.552 "copy": true, 00:41:32.552 "nvme_iov_md": false 00:41:32.552 }, 00:41:32.552 "memory_domains": [ 00:41:32.552 { 00:41:32.552 "dma_device_id": "system", 00:41:32.552 "dma_device_type": 1 00:41:32.552 }, 00:41:32.552 { 00:41:32.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:32.552 "dma_device_type": 2 00:41:32.552 } 00:41:32.552 ], 00:41:32.552 "driver_specific": {} 00:41:32.552 } 00:41:32.552 ] 00:41:32.552 17:38:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.552 17:38:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:41:32.552 17:38:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:41:32.552 17:38:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:41:32.552 17:38:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:41:32.552 17:38:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:41:32.552 17:38:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:41:32.552 17:38:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:32.552 17:38:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:32.552 17:38:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:41:32.552 17:38:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:32.552 17:38:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:32.552 17:38:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:32.552 17:38:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:32.552 17:38:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:32.552 17:38:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:32.552 17:38:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.552 17:38:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:32.552 17:38:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.552 17:38:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:32.552 "name": "Existed_Raid", 00:41:32.552 "uuid": "5105a505-469e-4d01-846e-480ba8fd2240", 00:41:32.552 "strip_size_kb": 64, 00:41:32.552 "state": "configuring", 00:41:32.552 "raid_level": "raid5f", 00:41:32.552 "superblock": true, 00:41:32.552 "num_base_bdevs": 4, 00:41:32.552 "num_base_bdevs_discovered": 3, 
00:41:32.552 "num_base_bdevs_operational": 4, 00:41:32.552 "base_bdevs_list": [ 00:41:32.552 { 00:41:32.552 "name": "BaseBdev1", 00:41:32.552 "uuid": "e85097d7-f6a9-428c-913d-d22a72a36c8a", 00:41:32.552 "is_configured": true, 00:41:32.552 "data_offset": 2048, 00:41:32.552 "data_size": 63488 00:41:32.552 }, 00:41:32.552 { 00:41:32.552 "name": "BaseBdev2", 00:41:32.552 "uuid": "637e7307-363e-4697-80e2-391f284c0946", 00:41:32.552 "is_configured": true, 00:41:32.552 "data_offset": 2048, 00:41:32.552 "data_size": 63488 00:41:32.552 }, 00:41:32.552 { 00:41:32.552 "name": "BaseBdev3", 00:41:32.552 "uuid": "b74cea92-abd7-4aa4-a638-efd69bf26612", 00:41:32.552 "is_configured": true, 00:41:32.552 "data_offset": 2048, 00:41:32.552 "data_size": 63488 00:41:32.552 }, 00:41:32.552 { 00:41:32.552 "name": "BaseBdev4", 00:41:32.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:32.552 "is_configured": false, 00:41:32.552 "data_offset": 0, 00:41:32.552 "data_size": 0 00:41:32.552 } 00:41:32.552 ] 00:41:32.552 }' 00:41:32.552 17:38:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:32.552 17:38:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:33.121 17:38:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:41:33.121 17:38:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.121 17:38:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:33.121 [2024-11-26 17:38:33.569178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:41:33.121 [2024-11-26 17:38:33.569641] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:41:33.121 [2024-11-26 17:38:33.569700] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:41:33.121 [2024-11-26 
17:38:33.570031] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:41:33.121 BaseBdev4 00:41:33.121 17:38:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.121 17:38:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:41:33.121 17:38:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:41:33.121 17:38:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:41:33.121 17:38:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:41:33.121 17:38:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:41:33.121 17:38:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:41:33.121 17:38:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:41:33.121 17:38:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.121 17:38:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:33.121 [2024-11-26 17:38:33.577647] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:41:33.121 [2024-11-26 17:38:33.577713] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:41:33.121 [2024-11-26 17:38:33.578064] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:33.121 17:38:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.121 17:38:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:41:33.121 17:38:33 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.121 17:38:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:33.121 [ 00:41:33.121 { 00:41:33.121 "name": "BaseBdev4", 00:41:33.121 "aliases": [ 00:41:33.121 "bb86b6c2-7750-4283-9360-7e8174538285" 00:41:33.121 ], 00:41:33.121 "product_name": "Malloc disk", 00:41:33.121 "block_size": 512, 00:41:33.121 "num_blocks": 65536, 00:41:33.121 "uuid": "bb86b6c2-7750-4283-9360-7e8174538285", 00:41:33.121 "assigned_rate_limits": { 00:41:33.121 "rw_ios_per_sec": 0, 00:41:33.121 "rw_mbytes_per_sec": 0, 00:41:33.121 "r_mbytes_per_sec": 0, 00:41:33.121 "w_mbytes_per_sec": 0 00:41:33.121 }, 00:41:33.121 "claimed": true, 00:41:33.121 "claim_type": "exclusive_write", 00:41:33.121 "zoned": false, 00:41:33.121 "supported_io_types": { 00:41:33.121 "read": true, 00:41:33.121 "write": true, 00:41:33.121 "unmap": true, 00:41:33.121 "flush": true, 00:41:33.121 "reset": true, 00:41:33.121 "nvme_admin": false, 00:41:33.121 "nvme_io": false, 00:41:33.121 "nvme_io_md": false, 00:41:33.121 "write_zeroes": true, 00:41:33.121 "zcopy": true, 00:41:33.121 "get_zone_info": false, 00:41:33.121 "zone_management": false, 00:41:33.121 "zone_append": false, 00:41:33.121 "compare": false, 00:41:33.121 "compare_and_write": false, 00:41:33.121 "abort": true, 00:41:33.121 "seek_hole": false, 00:41:33.121 "seek_data": false, 00:41:33.121 "copy": true, 00:41:33.121 "nvme_iov_md": false 00:41:33.121 }, 00:41:33.121 "memory_domains": [ 00:41:33.121 { 00:41:33.121 "dma_device_id": "system", 00:41:33.121 "dma_device_type": 1 00:41:33.121 }, 00:41:33.121 { 00:41:33.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:33.121 "dma_device_type": 2 00:41:33.121 } 00:41:33.121 ], 00:41:33.121 "driver_specific": {} 00:41:33.121 } 00:41:33.121 ] 00:41:33.121 17:38:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.121 17:38:33 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:41:33.121 17:38:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:41:33.121 17:38:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:41:33.121 17:38:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:41:33.121 17:38:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:41:33.121 17:38:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:33.121 17:38:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:33.121 17:38:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:33.121 17:38:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:41:33.121 17:38:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:33.121 17:38:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:33.121 17:38:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:33.121 17:38:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:33.121 17:38:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:33.121 17:38:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:33.121 17:38:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.121 17:38:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:41:33.121 17:38:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.121 17:38:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:33.121 "name": "Existed_Raid", 00:41:33.121 "uuid": "5105a505-469e-4d01-846e-480ba8fd2240", 00:41:33.121 "strip_size_kb": 64, 00:41:33.121 "state": "online", 00:41:33.121 "raid_level": "raid5f", 00:41:33.121 "superblock": true, 00:41:33.121 "num_base_bdevs": 4, 00:41:33.121 "num_base_bdevs_discovered": 4, 00:41:33.121 "num_base_bdevs_operational": 4, 00:41:33.121 "base_bdevs_list": [ 00:41:33.121 { 00:41:33.121 "name": "BaseBdev1", 00:41:33.121 "uuid": "e85097d7-f6a9-428c-913d-d22a72a36c8a", 00:41:33.121 "is_configured": true, 00:41:33.121 "data_offset": 2048, 00:41:33.121 "data_size": 63488 00:41:33.121 }, 00:41:33.121 { 00:41:33.121 "name": "BaseBdev2", 00:41:33.121 "uuid": "637e7307-363e-4697-80e2-391f284c0946", 00:41:33.121 "is_configured": true, 00:41:33.121 "data_offset": 2048, 00:41:33.121 "data_size": 63488 00:41:33.121 }, 00:41:33.121 { 00:41:33.121 "name": "BaseBdev3", 00:41:33.121 "uuid": "b74cea92-abd7-4aa4-a638-efd69bf26612", 00:41:33.121 "is_configured": true, 00:41:33.121 "data_offset": 2048, 00:41:33.121 "data_size": 63488 00:41:33.121 }, 00:41:33.121 { 00:41:33.121 "name": "BaseBdev4", 00:41:33.121 "uuid": "bb86b6c2-7750-4283-9360-7e8174538285", 00:41:33.121 "is_configured": true, 00:41:33.121 "data_offset": 2048, 00:41:33.121 "data_size": 63488 00:41:33.121 } 00:41:33.121 ] 00:41:33.121 }' 00:41:33.121 17:38:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:33.121 17:38:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:33.381 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:41:33.381 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:41:33.381 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:41:33.381 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:41:33.381 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:41:33.381 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:41:33.381 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:41:33.381 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:41:33.381 17:38:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.381 17:38:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:33.381 [2024-11-26 17:38:34.066910] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:41:33.641 17:38:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.641 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:41:33.641 "name": "Existed_Raid", 00:41:33.641 "aliases": [ 00:41:33.641 "5105a505-469e-4d01-846e-480ba8fd2240" 00:41:33.641 ], 00:41:33.641 "product_name": "Raid Volume", 00:41:33.641 "block_size": 512, 00:41:33.641 "num_blocks": 190464, 00:41:33.641 "uuid": "5105a505-469e-4d01-846e-480ba8fd2240", 00:41:33.641 "assigned_rate_limits": { 00:41:33.641 "rw_ios_per_sec": 0, 00:41:33.641 "rw_mbytes_per_sec": 0, 00:41:33.641 "r_mbytes_per_sec": 0, 00:41:33.641 "w_mbytes_per_sec": 0 00:41:33.641 }, 00:41:33.641 "claimed": false, 00:41:33.641 "zoned": false, 00:41:33.641 "supported_io_types": { 00:41:33.641 "read": true, 00:41:33.641 "write": true, 00:41:33.641 "unmap": false, 00:41:33.641 "flush": false, 
00:41:33.641 "reset": true, 00:41:33.641 "nvme_admin": false, 00:41:33.641 "nvme_io": false, 00:41:33.641 "nvme_io_md": false, 00:41:33.641 "write_zeroes": true, 00:41:33.641 "zcopy": false, 00:41:33.641 "get_zone_info": false, 00:41:33.641 "zone_management": false, 00:41:33.641 "zone_append": false, 00:41:33.641 "compare": false, 00:41:33.641 "compare_and_write": false, 00:41:33.641 "abort": false, 00:41:33.641 "seek_hole": false, 00:41:33.641 "seek_data": false, 00:41:33.641 "copy": false, 00:41:33.641 "nvme_iov_md": false 00:41:33.641 }, 00:41:33.641 "driver_specific": { 00:41:33.641 "raid": { 00:41:33.641 "uuid": "5105a505-469e-4d01-846e-480ba8fd2240", 00:41:33.641 "strip_size_kb": 64, 00:41:33.641 "state": "online", 00:41:33.641 "raid_level": "raid5f", 00:41:33.641 "superblock": true, 00:41:33.641 "num_base_bdevs": 4, 00:41:33.641 "num_base_bdevs_discovered": 4, 00:41:33.641 "num_base_bdevs_operational": 4, 00:41:33.641 "base_bdevs_list": [ 00:41:33.641 { 00:41:33.641 "name": "BaseBdev1", 00:41:33.641 "uuid": "e85097d7-f6a9-428c-913d-d22a72a36c8a", 00:41:33.641 "is_configured": true, 00:41:33.641 "data_offset": 2048, 00:41:33.641 "data_size": 63488 00:41:33.641 }, 00:41:33.641 { 00:41:33.641 "name": "BaseBdev2", 00:41:33.641 "uuid": "637e7307-363e-4697-80e2-391f284c0946", 00:41:33.641 "is_configured": true, 00:41:33.641 "data_offset": 2048, 00:41:33.641 "data_size": 63488 00:41:33.641 }, 00:41:33.641 { 00:41:33.641 "name": "BaseBdev3", 00:41:33.641 "uuid": "b74cea92-abd7-4aa4-a638-efd69bf26612", 00:41:33.641 "is_configured": true, 00:41:33.641 "data_offset": 2048, 00:41:33.641 "data_size": 63488 00:41:33.641 }, 00:41:33.641 { 00:41:33.641 "name": "BaseBdev4", 00:41:33.641 "uuid": "bb86b6c2-7750-4283-9360-7e8174538285", 00:41:33.641 "is_configured": true, 00:41:33.641 "data_offset": 2048, 00:41:33.641 "data_size": 63488 00:41:33.641 } 00:41:33.641 ] 00:41:33.641 } 00:41:33.641 } 00:41:33.641 }' 00:41:33.641 17:38:34 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:41:33.641 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:41:33.641 BaseBdev2 00:41:33.641 BaseBdev3 00:41:33.641 BaseBdev4' 00:41:33.641 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:41:33.641 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:41:33.641 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:41:33.641 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:41:33.641 17:38:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.641 17:38:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:33.641 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:41:33.641 17:38:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.641 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:41:33.641 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:41:33.641 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:41:33.641 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:41:33.641 17:38:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.641 17:38:34 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:41:33.641 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:41:33.641 17:38:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.641 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:41:33.641 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:41:33.641 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:41:33.641 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:41:33.641 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:41:33.641 17:38:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.641 17:38:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:33.641 17:38:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.902 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:41:33.902 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:41:33.902 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:41:33.902 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:41:33.902 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:41:33.902 17:38:34 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.902 17:38:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:33.902 17:38:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.902 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:41:33.902 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:41:33.902 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:41:33.902 17:38:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.902 17:38:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:33.902 [2024-11-26 17:38:34.390122] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:41:33.902 17:38:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.902 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:41:33.902 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:41:33.902 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:41:33.902 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:41:33.902 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:41:33.902 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:41:33.902 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:41:33.902 17:38:34 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:33.902 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:33.902 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:33.902 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:41:33.902 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:33.902 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:33.902 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:33.902 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:33.902 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:33.902 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:33.902 17:38:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.902 17:38:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:33.902 17:38:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.902 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:33.902 "name": "Existed_Raid", 00:41:33.902 "uuid": "5105a505-469e-4d01-846e-480ba8fd2240", 00:41:33.902 "strip_size_kb": 64, 00:41:33.902 "state": "online", 00:41:33.902 "raid_level": "raid5f", 00:41:33.902 "superblock": true, 00:41:33.902 "num_base_bdevs": 4, 00:41:33.902 "num_base_bdevs_discovered": 3, 00:41:33.902 "num_base_bdevs_operational": 3, 00:41:33.902 "base_bdevs_list": [ 00:41:33.902 { 00:41:33.902 "name": 
null, 00:41:33.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:33.902 "is_configured": false, 00:41:33.902 "data_offset": 0, 00:41:33.902 "data_size": 63488 00:41:33.902 }, 00:41:33.902 { 00:41:33.902 "name": "BaseBdev2", 00:41:33.902 "uuid": "637e7307-363e-4697-80e2-391f284c0946", 00:41:33.902 "is_configured": true, 00:41:33.902 "data_offset": 2048, 00:41:33.902 "data_size": 63488 00:41:33.902 }, 00:41:33.902 { 00:41:33.903 "name": "BaseBdev3", 00:41:33.903 "uuid": "b74cea92-abd7-4aa4-a638-efd69bf26612", 00:41:33.903 "is_configured": true, 00:41:33.903 "data_offset": 2048, 00:41:33.903 "data_size": 63488 00:41:33.903 }, 00:41:33.903 { 00:41:33.903 "name": "BaseBdev4", 00:41:33.903 "uuid": "bb86b6c2-7750-4283-9360-7e8174538285", 00:41:33.903 "is_configured": true, 00:41:33.903 "data_offset": 2048, 00:41:33.903 "data_size": 63488 00:41:33.903 } 00:41:33.903 ] 00:41:33.903 }' 00:41:33.903 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:33.903 17:38:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:34.472 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:41:34.472 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:41:34.472 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:34.472 17:38:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:34.472 17:38:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:34.472 17:38:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:41:34.472 17:38:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:34.472 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:41:34.472 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:41:34.472 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:41:34.472 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:34.472 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:34.472 [2024-11-26 17:38:35.024203] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:41:34.472 [2024-11-26 17:38:35.024508] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:41:34.472 [2024-11-26 17:38:35.133495] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:41:34.472 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:34.472 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:41:34.472 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:41:34.472 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:34.472 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:41:34.472 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:34.472 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:34.472 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:34.732 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:41:34.732 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:41:34.732 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:41:34.732 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:34.732 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:34.732 [2024-11-26 17:38:35.197397] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:41:34.732 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:34.732 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:41:34.732 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:41:34.732 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:34.732 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:41:34.732 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:34.732 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:34.732 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:34.732 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:41:34.732 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:41:34.732 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:41:34.732 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:34.732 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:34.732 [2024-11-26 
17:38:35.359168] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:41:34.732 [2024-11-26 17:38:35.359281] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:41:34.991 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:34.991 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:34.992 17:38:35 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:34.992 BaseBdev2 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:34.992 [ 00:41:34.992 { 00:41:34.992 "name": "BaseBdev2", 00:41:34.992 "aliases": [ 00:41:34.992 "c9b14856-fd96-4659-a893-ecc329b143dd" 00:41:34.992 ], 00:41:34.992 "product_name": "Malloc disk", 00:41:34.992 "block_size": 512, 00:41:34.992 
"num_blocks": 65536, 00:41:34.992 "uuid": "c9b14856-fd96-4659-a893-ecc329b143dd", 00:41:34.992 "assigned_rate_limits": { 00:41:34.992 "rw_ios_per_sec": 0, 00:41:34.992 "rw_mbytes_per_sec": 0, 00:41:34.992 "r_mbytes_per_sec": 0, 00:41:34.992 "w_mbytes_per_sec": 0 00:41:34.992 }, 00:41:34.992 "claimed": false, 00:41:34.992 "zoned": false, 00:41:34.992 "supported_io_types": { 00:41:34.992 "read": true, 00:41:34.992 "write": true, 00:41:34.992 "unmap": true, 00:41:34.992 "flush": true, 00:41:34.992 "reset": true, 00:41:34.992 "nvme_admin": false, 00:41:34.992 "nvme_io": false, 00:41:34.992 "nvme_io_md": false, 00:41:34.992 "write_zeroes": true, 00:41:34.992 "zcopy": true, 00:41:34.992 "get_zone_info": false, 00:41:34.992 "zone_management": false, 00:41:34.992 "zone_append": false, 00:41:34.992 "compare": false, 00:41:34.992 "compare_and_write": false, 00:41:34.992 "abort": true, 00:41:34.992 "seek_hole": false, 00:41:34.992 "seek_data": false, 00:41:34.992 "copy": true, 00:41:34.992 "nvme_iov_md": false 00:41:34.992 }, 00:41:34.992 "memory_domains": [ 00:41:34.992 { 00:41:34.992 "dma_device_id": "system", 00:41:34.992 "dma_device_type": 1 00:41:34.992 }, 00:41:34.992 { 00:41:34.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:34.992 "dma_device_type": 2 00:41:34.992 } 00:41:34.992 ], 00:41:34.992 "driver_specific": {} 00:41:34.992 } 00:41:34.992 ] 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:41:34.992 17:38:35 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:34.992 BaseBdev3 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:34.992 [ 00:41:34.992 { 00:41:34.992 "name": "BaseBdev3", 00:41:34.992 "aliases": [ 00:41:34.992 
"bc20554b-0486-423a-ac42-c1a26ec52b0e" 00:41:34.992 ], 00:41:34.992 "product_name": "Malloc disk", 00:41:34.992 "block_size": 512, 00:41:34.992 "num_blocks": 65536, 00:41:34.992 "uuid": "bc20554b-0486-423a-ac42-c1a26ec52b0e", 00:41:34.992 "assigned_rate_limits": { 00:41:34.992 "rw_ios_per_sec": 0, 00:41:34.992 "rw_mbytes_per_sec": 0, 00:41:34.992 "r_mbytes_per_sec": 0, 00:41:34.992 "w_mbytes_per_sec": 0 00:41:34.992 }, 00:41:34.992 "claimed": false, 00:41:34.992 "zoned": false, 00:41:34.992 "supported_io_types": { 00:41:34.992 "read": true, 00:41:34.992 "write": true, 00:41:34.992 "unmap": true, 00:41:34.992 "flush": true, 00:41:34.992 "reset": true, 00:41:34.992 "nvme_admin": false, 00:41:34.992 "nvme_io": false, 00:41:34.992 "nvme_io_md": false, 00:41:34.992 "write_zeroes": true, 00:41:34.992 "zcopy": true, 00:41:34.992 "get_zone_info": false, 00:41:34.992 "zone_management": false, 00:41:34.992 "zone_append": false, 00:41:34.992 "compare": false, 00:41:34.992 "compare_and_write": false, 00:41:34.992 "abort": true, 00:41:34.992 "seek_hole": false, 00:41:34.992 "seek_data": false, 00:41:34.992 "copy": true, 00:41:34.992 "nvme_iov_md": false 00:41:34.992 }, 00:41:34.992 "memory_domains": [ 00:41:34.992 { 00:41:34.992 "dma_device_id": "system", 00:41:34.992 "dma_device_type": 1 00:41:34.992 }, 00:41:34.992 { 00:41:34.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:34.992 "dma_device_type": 2 00:41:34.992 } 00:41:34.992 ], 00:41:34.992 "driver_specific": {} 00:41:34.992 } 00:41:34.992 ] 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:41:34.992 17:38:35 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:34.992 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:35.252 BaseBdev4 00:41:35.252 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.252 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:41:35.252 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:41:35.252 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:41:35.252 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:41:35.252 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:41:35.252 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:41:35.252 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:41:35.252 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.252 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:35.252 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.252 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:41:35.252 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.252 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:41:35.252 [ 00:41:35.252 { 00:41:35.252 "name": "BaseBdev4", 00:41:35.252 "aliases": [ 00:41:35.252 "ba968d7b-b084-46af-ae92-6cf6509689f5" 00:41:35.252 ], 00:41:35.252 "product_name": "Malloc disk", 00:41:35.252 "block_size": 512, 00:41:35.252 "num_blocks": 65536, 00:41:35.252 "uuid": "ba968d7b-b084-46af-ae92-6cf6509689f5", 00:41:35.252 "assigned_rate_limits": { 00:41:35.252 "rw_ios_per_sec": 0, 00:41:35.252 "rw_mbytes_per_sec": 0, 00:41:35.252 "r_mbytes_per_sec": 0, 00:41:35.252 "w_mbytes_per_sec": 0 00:41:35.252 }, 00:41:35.252 "claimed": false, 00:41:35.252 "zoned": false, 00:41:35.252 "supported_io_types": { 00:41:35.252 "read": true, 00:41:35.252 "write": true, 00:41:35.252 "unmap": true, 00:41:35.252 "flush": true, 00:41:35.252 "reset": true, 00:41:35.252 "nvme_admin": false, 00:41:35.252 "nvme_io": false, 00:41:35.252 "nvme_io_md": false, 00:41:35.252 "write_zeroes": true, 00:41:35.252 "zcopy": true, 00:41:35.252 "get_zone_info": false, 00:41:35.252 "zone_management": false, 00:41:35.252 "zone_append": false, 00:41:35.252 "compare": false, 00:41:35.252 "compare_and_write": false, 00:41:35.252 "abort": true, 00:41:35.252 "seek_hole": false, 00:41:35.252 "seek_data": false, 00:41:35.252 "copy": true, 00:41:35.252 "nvme_iov_md": false 00:41:35.252 }, 00:41:35.252 "memory_domains": [ 00:41:35.252 { 00:41:35.252 "dma_device_id": "system", 00:41:35.252 "dma_device_type": 1 00:41:35.252 }, 00:41:35.252 { 00:41:35.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:35.252 "dma_device_type": 2 00:41:35.252 } 00:41:35.252 ], 00:41:35.252 "driver_specific": {} 00:41:35.252 } 00:41:35.252 ] 00:41:35.252 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.252 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:41:35.252 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:41:35.252 17:38:35 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:41:35.252 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:41:35.252 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.252 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:35.252 [2024-11-26 17:38:35.773349] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:41:35.252 [2024-11-26 17:38:35.773405] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:41:35.252 [2024-11-26 17:38:35.773430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:41:35.252 [2024-11-26 17:38:35.775604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:41:35.252 [2024-11-26 17:38:35.775658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:41:35.252 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.252 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:41:35.252 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:41:35.252 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:41:35.252 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:35.252 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:35.252 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:41:35.252 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:35.252 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:35.252 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:35.252 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:35.253 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:35.253 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:35.253 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.253 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:35.253 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.253 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:35.253 "name": "Existed_Raid", 00:41:35.253 "uuid": "390d4583-b716-4bf7-87b7-240f607901ba", 00:41:35.253 "strip_size_kb": 64, 00:41:35.253 "state": "configuring", 00:41:35.253 "raid_level": "raid5f", 00:41:35.253 "superblock": true, 00:41:35.253 "num_base_bdevs": 4, 00:41:35.253 "num_base_bdevs_discovered": 3, 00:41:35.253 "num_base_bdevs_operational": 4, 00:41:35.253 "base_bdevs_list": [ 00:41:35.253 { 00:41:35.253 "name": "BaseBdev1", 00:41:35.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:35.253 "is_configured": false, 00:41:35.253 "data_offset": 0, 00:41:35.253 "data_size": 0 00:41:35.253 }, 00:41:35.253 { 00:41:35.253 "name": "BaseBdev2", 00:41:35.253 "uuid": "c9b14856-fd96-4659-a893-ecc329b143dd", 00:41:35.253 "is_configured": true, 00:41:35.253 "data_offset": 2048, 00:41:35.253 
"data_size": 63488 00:41:35.253 }, 00:41:35.253 { 00:41:35.253 "name": "BaseBdev3", 00:41:35.253 "uuid": "bc20554b-0486-423a-ac42-c1a26ec52b0e", 00:41:35.253 "is_configured": true, 00:41:35.253 "data_offset": 2048, 00:41:35.253 "data_size": 63488 00:41:35.253 }, 00:41:35.253 { 00:41:35.253 "name": "BaseBdev4", 00:41:35.253 "uuid": "ba968d7b-b084-46af-ae92-6cf6509689f5", 00:41:35.253 "is_configured": true, 00:41:35.253 "data_offset": 2048, 00:41:35.253 "data_size": 63488 00:41:35.253 } 00:41:35.253 ] 00:41:35.253 }' 00:41:35.253 17:38:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:35.253 17:38:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:35.836 17:38:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:41:35.836 17:38:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.836 17:38:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:35.836 [2024-11-26 17:38:36.252681] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:41:35.836 17:38:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.836 17:38:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:41:35.836 17:38:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:41:35.836 17:38:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:41:35.836 17:38:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:35.836 17:38:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:35.836 17:38:36 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:41:35.836 17:38:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:35.836 17:38:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:35.836 17:38:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:35.836 17:38:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:35.836 17:38:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:35.836 17:38:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.836 17:38:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:35.836 17:38:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:35.836 17:38:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.836 17:38:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:35.836 "name": "Existed_Raid", 00:41:35.836 "uuid": "390d4583-b716-4bf7-87b7-240f607901ba", 00:41:35.836 "strip_size_kb": 64, 00:41:35.836 "state": "configuring", 00:41:35.836 "raid_level": "raid5f", 00:41:35.836 "superblock": true, 00:41:35.836 "num_base_bdevs": 4, 00:41:35.836 "num_base_bdevs_discovered": 2, 00:41:35.836 "num_base_bdevs_operational": 4, 00:41:35.836 "base_bdevs_list": [ 00:41:35.836 { 00:41:35.836 "name": "BaseBdev1", 00:41:35.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:35.836 "is_configured": false, 00:41:35.836 "data_offset": 0, 00:41:35.836 "data_size": 0 00:41:35.836 }, 00:41:35.836 { 00:41:35.836 "name": null, 00:41:35.836 "uuid": "c9b14856-fd96-4659-a893-ecc329b143dd", 00:41:35.836 
"is_configured": false, 00:41:35.836 "data_offset": 0, 00:41:35.836 "data_size": 63488 00:41:35.836 }, 00:41:35.836 { 00:41:35.836 "name": "BaseBdev3", 00:41:35.836 "uuid": "bc20554b-0486-423a-ac42-c1a26ec52b0e", 00:41:35.836 "is_configured": true, 00:41:35.836 "data_offset": 2048, 00:41:35.836 "data_size": 63488 00:41:35.836 }, 00:41:35.836 { 00:41:35.836 "name": "BaseBdev4", 00:41:35.836 "uuid": "ba968d7b-b084-46af-ae92-6cf6509689f5", 00:41:35.836 "is_configured": true, 00:41:35.836 "data_offset": 2048, 00:41:35.836 "data_size": 63488 00:41:35.836 } 00:41:35.836 ] 00:41:35.836 }' 00:41:35.836 17:38:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:35.836 17:38:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:36.100 17:38:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:36.100 17:38:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:41:36.100 17:38:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:36.100 17:38:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:36.100 17:38:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:36.100 17:38:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:41:36.100 17:38:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:41:36.100 17:38:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:36.100 17:38:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:36.366 [2024-11-26 17:38:36.804406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:41:36.366 BaseBdev1 00:41:36.366 17:38:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:36.366 17:38:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:41:36.366 17:38:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:41:36.366 17:38:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:41:36.366 17:38:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:41:36.366 17:38:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:41:36.366 17:38:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:41:36.366 17:38:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:41:36.366 17:38:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:36.366 17:38:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:36.366 17:38:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:36.366 17:38:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:41:36.366 17:38:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:36.366 17:38:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:36.366 [ 00:41:36.366 { 00:41:36.366 "name": "BaseBdev1", 00:41:36.366 "aliases": [ 00:41:36.366 "d646acc3-18cf-4d52-afa8-e6ea780ee013" 00:41:36.367 ], 00:41:36.367 "product_name": "Malloc disk", 00:41:36.367 "block_size": 512, 00:41:36.367 "num_blocks": 65536, 00:41:36.367 "uuid": "d646acc3-18cf-4d52-afa8-e6ea780ee013", 
00:41:36.367 "assigned_rate_limits": { 00:41:36.367 "rw_ios_per_sec": 0, 00:41:36.367 "rw_mbytes_per_sec": 0, 00:41:36.367 "r_mbytes_per_sec": 0, 00:41:36.367 "w_mbytes_per_sec": 0 00:41:36.367 }, 00:41:36.367 "claimed": true, 00:41:36.367 "claim_type": "exclusive_write", 00:41:36.367 "zoned": false, 00:41:36.367 "supported_io_types": { 00:41:36.367 "read": true, 00:41:36.367 "write": true, 00:41:36.367 "unmap": true, 00:41:36.367 "flush": true, 00:41:36.367 "reset": true, 00:41:36.367 "nvme_admin": false, 00:41:36.367 "nvme_io": false, 00:41:36.367 "nvme_io_md": false, 00:41:36.367 "write_zeroes": true, 00:41:36.367 "zcopy": true, 00:41:36.367 "get_zone_info": false, 00:41:36.367 "zone_management": false, 00:41:36.367 "zone_append": false, 00:41:36.367 "compare": false, 00:41:36.367 "compare_and_write": false, 00:41:36.367 "abort": true, 00:41:36.367 "seek_hole": false, 00:41:36.367 "seek_data": false, 00:41:36.367 "copy": true, 00:41:36.367 "nvme_iov_md": false 00:41:36.367 }, 00:41:36.367 "memory_domains": [ 00:41:36.367 { 00:41:36.367 "dma_device_id": "system", 00:41:36.367 "dma_device_type": 1 00:41:36.367 }, 00:41:36.367 { 00:41:36.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:36.367 "dma_device_type": 2 00:41:36.367 } 00:41:36.367 ], 00:41:36.367 "driver_specific": {} 00:41:36.367 } 00:41:36.367 ] 00:41:36.367 17:38:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:36.367 17:38:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:41:36.367 17:38:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:41:36.367 17:38:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:41:36.367 17:38:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:41:36.367 17:38:36 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:36.367 17:38:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:36.367 17:38:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:41:36.367 17:38:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:36.367 17:38:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:36.367 17:38:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:36.367 17:38:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:36.367 17:38:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:36.367 17:38:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:36.367 17:38:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:36.367 17:38:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:36.367 17:38:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:36.367 17:38:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:36.367 "name": "Existed_Raid", 00:41:36.367 "uuid": "390d4583-b716-4bf7-87b7-240f607901ba", 00:41:36.367 "strip_size_kb": 64, 00:41:36.367 "state": "configuring", 00:41:36.367 "raid_level": "raid5f", 00:41:36.367 "superblock": true, 00:41:36.367 "num_base_bdevs": 4, 00:41:36.367 "num_base_bdevs_discovered": 3, 00:41:36.367 "num_base_bdevs_operational": 4, 00:41:36.367 "base_bdevs_list": [ 00:41:36.367 { 00:41:36.367 "name": "BaseBdev1", 00:41:36.367 "uuid": "d646acc3-18cf-4d52-afa8-e6ea780ee013", 
00:41:36.367 "is_configured": true, 00:41:36.367 "data_offset": 2048, 00:41:36.367 "data_size": 63488 00:41:36.367 }, 00:41:36.367 { 00:41:36.367 "name": null, 00:41:36.367 "uuid": "c9b14856-fd96-4659-a893-ecc329b143dd", 00:41:36.367 "is_configured": false, 00:41:36.367 "data_offset": 0, 00:41:36.367 "data_size": 63488 00:41:36.367 }, 00:41:36.367 { 00:41:36.367 "name": "BaseBdev3", 00:41:36.367 "uuid": "bc20554b-0486-423a-ac42-c1a26ec52b0e", 00:41:36.367 "is_configured": true, 00:41:36.367 "data_offset": 2048, 00:41:36.367 "data_size": 63488 00:41:36.367 }, 00:41:36.367 { 00:41:36.367 "name": "BaseBdev4", 00:41:36.367 "uuid": "ba968d7b-b084-46af-ae92-6cf6509689f5", 00:41:36.367 "is_configured": true, 00:41:36.367 "data_offset": 2048, 00:41:36.367 "data_size": 63488 00:41:36.367 } 00:41:36.367 ] 00:41:36.367 }' 00:41:36.367 17:38:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:36.367 17:38:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:36.626 17:38:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:36.626 17:38:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:41:36.626 17:38:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:36.626 17:38:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:36.626 17:38:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:36.626 17:38:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:41:36.626 17:38:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:41:36.626 17:38:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:41:36.626 17:38:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:36.626 [2024-11-26 17:38:37.311684] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:41:36.626 17:38:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:36.626 17:38:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:41:36.626 17:38:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:41:36.626 17:38:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:41:36.626 17:38:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:36.626 17:38:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:36.626 17:38:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:41:36.626 17:38:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:36.626 17:38:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:36.626 17:38:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:36.626 17:38:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:36.884 17:38:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:36.884 17:38:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:36.884 17:38:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:36.884 17:38:37 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:41:36.884 17:38:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:36.884 17:38:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:36.884 "name": "Existed_Raid", 00:41:36.884 "uuid": "390d4583-b716-4bf7-87b7-240f607901ba", 00:41:36.884 "strip_size_kb": 64, 00:41:36.884 "state": "configuring", 00:41:36.884 "raid_level": "raid5f", 00:41:36.884 "superblock": true, 00:41:36.884 "num_base_bdevs": 4, 00:41:36.884 "num_base_bdevs_discovered": 2, 00:41:36.884 "num_base_bdevs_operational": 4, 00:41:36.884 "base_bdevs_list": [ 00:41:36.884 { 00:41:36.884 "name": "BaseBdev1", 00:41:36.884 "uuid": "d646acc3-18cf-4d52-afa8-e6ea780ee013", 00:41:36.884 "is_configured": true, 00:41:36.884 "data_offset": 2048, 00:41:36.884 "data_size": 63488 00:41:36.884 }, 00:41:36.884 { 00:41:36.884 "name": null, 00:41:36.884 "uuid": "c9b14856-fd96-4659-a893-ecc329b143dd", 00:41:36.884 "is_configured": false, 00:41:36.884 "data_offset": 0, 00:41:36.884 "data_size": 63488 00:41:36.884 }, 00:41:36.884 { 00:41:36.884 "name": null, 00:41:36.884 "uuid": "bc20554b-0486-423a-ac42-c1a26ec52b0e", 00:41:36.884 "is_configured": false, 00:41:36.884 "data_offset": 0, 00:41:36.884 "data_size": 63488 00:41:36.884 }, 00:41:36.884 { 00:41:36.884 "name": "BaseBdev4", 00:41:36.884 "uuid": "ba968d7b-b084-46af-ae92-6cf6509689f5", 00:41:36.884 "is_configured": true, 00:41:36.884 "data_offset": 2048, 00:41:36.884 "data_size": 63488 00:41:36.884 } 00:41:36.884 ] 00:41:36.884 }' 00:41:36.884 17:38:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:36.884 17:38:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:37.143 17:38:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:37.143 17:38:37 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:41:37.143 17:38:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:37.143 17:38:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:37.143 17:38:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:37.402 17:38:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:41:37.402 17:38:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:41:37.402 17:38:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:37.402 17:38:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:37.402 [2024-11-26 17:38:37.846743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:41:37.402 17:38:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:37.402 17:38:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:41:37.402 17:38:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:41:37.402 17:38:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:41:37.402 17:38:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:37.402 17:38:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:37.402 17:38:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:41:37.402 17:38:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:41:37.402 17:38:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:37.402 17:38:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:37.402 17:38:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:37.402 17:38:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:37.402 17:38:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:37.402 17:38:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:37.402 17:38:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:37.402 17:38:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:37.402 17:38:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:37.402 "name": "Existed_Raid", 00:41:37.402 "uuid": "390d4583-b716-4bf7-87b7-240f607901ba", 00:41:37.402 "strip_size_kb": 64, 00:41:37.402 "state": "configuring", 00:41:37.402 "raid_level": "raid5f", 00:41:37.402 "superblock": true, 00:41:37.402 "num_base_bdevs": 4, 00:41:37.402 "num_base_bdevs_discovered": 3, 00:41:37.402 "num_base_bdevs_operational": 4, 00:41:37.402 "base_bdevs_list": [ 00:41:37.402 { 00:41:37.402 "name": "BaseBdev1", 00:41:37.402 "uuid": "d646acc3-18cf-4d52-afa8-e6ea780ee013", 00:41:37.402 "is_configured": true, 00:41:37.402 "data_offset": 2048, 00:41:37.402 "data_size": 63488 00:41:37.402 }, 00:41:37.402 { 00:41:37.402 "name": null, 00:41:37.402 "uuid": "c9b14856-fd96-4659-a893-ecc329b143dd", 00:41:37.402 "is_configured": false, 00:41:37.402 "data_offset": 0, 00:41:37.402 "data_size": 63488 00:41:37.402 }, 00:41:37.402 { 00:41:37.402 "name": "BaseBdev3", 00:41:37.402 "uuid": "bc20554b-0486-423a-ac42-c1a26ec52b0e", 
00:41:37.402 "is_configured": true, 00:41:37.402 "data_offset": 2048, 00:41:37.402 "data_size": 63488 00:41:37.402 }, 00:41:37.402 { 00:41:37.402 "name": "BaseBdev4", 00:41:37.402 "uuid": "ba968d7b-b084-46af-ae92-6cf6509689f5", 00:41:37.402 "is_configured": true, 00:41:37.402 "data_offset": 2048, 00:41:37.402 "data_size": 63488 00:41:37.402 } 00:41:37.402 ] 00:41:37.402 }' 00:41:37.402 17:38:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:37.402 17:38:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:37.660 17:38:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:37.660 17:38:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:41:37.660 17:38:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:37.660 17:38:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:37.660 17:38:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:37.919 17:38:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:41:37.919 17:38:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:41:37.919 17:38:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:37.919 17:38:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:37.919 [2024-11-26 17:38:38.381914] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:41:37.919 17:38:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:37.919 17:38:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:41:37.919 17:38:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:41:37.919 17:38:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:41:37.919 17:38:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:37.919 17:38:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:37.919 17:38:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:41:37.919 17:38:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:37.919 17:38:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:37.919 17:38:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:37.919 17:38:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:37.919 17:38:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:37.919 17:38:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:37.919 17:38:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:37.919 17:38:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:37.919 17:38:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:37.919 17:38:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:37.919 "name": "Existed_Raid", 00:41:37.919 "uuid": "390d4583-b716-4bf7-87b7-240f607901ba", 00:41:37.919 "strip_size_kb": 64, 00:41:37.919 "state": "configuring", 00:41:37.919 "raid_level": "raid5f", 
00:41:37.919 "superblock": true, 00:41:37.919 "num_base_bdevs": 4, 00:41:37.919 "num_base_bdevs_discovered": 2, 00:41:37.919 "num_base_bdevs_operational": 4, 00:41:37.919 "base_bdevs_list": [ 00:41:37.919 { 00:41:37.919 "name": null, 00:41:37.919 "uuid": "d646acc3-18cf-4d52-afa8-e6ea780ee013", 00:41:37.919 "is_configured": false, 00:41:37.919 "data_offset": 0, 00:41:37.919 "data_size": 63488 00:41:37.919 }, 00:41:37.919 { 00:41:37.919 "name": null, 00:41:37.919 "uuid": "c9b14856-fd96-4659-a893-ecc329b143dd", 00:41:37.919 "is_configured": false, 00:41:37.919 "data_offset": 0, 00:41:37.919 "data_size": 63488 00:41:37.919 }, 00:41:37.919 { 00:41:37.919 "name": "BaseBdev3", 00:41:37.919 "uuid": "bc20554b-0486-423a-ac42-c1a26ec52b0e", 00:41:37.919 "is_configured": true, 00:41:37.919 "data_offset": 2048, 00:41:37.919 "data_size": 63488 00:41:37.919 }, 00:41:37.919 { 00:41:37.919 "name": "BaseBdev4", 00:41:37.919 "uuid": "ba968d7b-b084-46af-ae92-6cf6509689f5", 00:41:37.919 "is_configured": true, 00:41:37.919 "data_offset": 2048, 00:41:37.919 "data_size": 63488 00:41:37.919 } 00:41:37.919 ] 00:41:37.919 }' 00:41:37.919 17:38:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:37.919 17:38:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:38.487 17:38:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:38.487 17:38:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:41:38.487 17:38:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:38.487 17:38:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:38.487 17:38:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:38.487 17:38:39 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:41:38.487 17:38:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:41:38.487 17:38:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:38.487 17:38:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:38.487 [2024-11-26 17:38:39.009204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:41:38.487 17:38:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:38.487 17:38:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:41:38.487 17:38:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:41:38.487 17:38:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:41:38.487 17:38:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:38.487 17:38:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:38.487 17:38:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:41:38.487 17:38:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:38.487 17:38:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:38.487 17:38:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:38.487 17:38:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:38.487 17:38:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
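The repeated `verify_raid_bdev_state Existed_Raid configuring raid5f 64 4` calls above boil down to: dump the raid bdev as JSON via `rpc_cmd bdev_raid_get_bdevs all`, select the entry by name, and compare individual fields against expected values. The sketch below imitates that check against a canned JSON blob (a stand-in for the RPC output; the real helper uses `jq` rather than `grep`/`sed`, and the field values are taken from the trace above):

```shell
#!/usr/bin/env bash
# Canned stand-in for `rpc_cmd bdev_raid_get_bdevs all | jq 'select(.name == ...)'`
raid_bdev_info='{
  "name": "Existed_Raid",
  "state": "configuring",
  "raid_level": "raid5f",
  "strip_size_kb": 64,
  "num_base_bdevs": 4
}'

# verify_field FIELD EXPECTED: succeed iff the JSON dump carries that value.
verify_field() {
    local field=$1 expected=$2 actual
    # Pull the `"field": value` pair out of the dump (quotes and trailing
    # comma are optional so both string and numeric fields match).
    actual=$(printf '%s\n' "$raid_bdev_info" | grep "\"$field\"" |
        sed 's/.*: *"\{0,1\}\([^",]*\)"\{0,1\},\{0,1\}$/\1/')
    [[ $actual == "$expected" ]]
}

verify_field state configuring &&
    verify_field raid_level raid5f &&
    verify_field strip_size_kb 64 && echo OK
```

The real helper fails the test when any field mismatches, which is why the trace re-dumps the full `raid_bdev_info` JSON after every mutation (base bdev delete, re-add, and so on).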
00:41:38.487 17:38:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:38.487 17:38:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:38.487 17:38:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:38.487 17:38:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:38.487 17:38:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:38.487 "name": "Existed_Raid", 00:41:38.487 "uuid": "390d4583-b716-4bf7-87b7-240f607901ba", 00:41:38.487 "strip_size_kb": 64, 00:41:38.487 "state": "configuring", 00:41:38.487 "raid_level": "raid5f", 00:41:38.487 "superblock": true, 00:41:38.487 "num_base_bdevs": 4, 00:41:38.487 "num_base_bdevs_discovered": 3, 00:41:38.487 "num_base_bdevs_operational": 4, 00:41:38.487 "base_bdevs_list": [ 00:41:38.487 { 00:41:38.487 "name": null, 00:41:38.487 "uuid": "d646acc3-18cf-4d52-afa8-e6ea780ee013", 00:41:38.487 "is_configured": false, 00:41:38.487 "data_offset": 0, 00:41:38.487 "data_size": 63488 00:41:38.487 }, 00:41:38.487 { 00:41:38.487 "name": "BaseBdev2", 00:41:38.487 "uuid": "c9b14856-fd96-4659-a893-ecc329b143dd", 00:41:38.487 "is_configured": true, 00:41:38.487 "data_offset": 2048, 00:41:38.487 "data_size": 63488 00:41:38.487 }, 00:41:38.487 { 00:41:38.487 "name": "BaseBdev3", 00:41:38.487 "uuid": "bc20554b-0486-423a-ac42-c1a26ec52b0e", 00:41:38.487 "is_configured": true, 00:41:38.487 "data_offset": 2048, 00:41:38.487 "data_size": 63488 00:41:38.487 }, 00:41:38.487 { 00:41:38.487 "name": "BaseBdev4", 00:41:38.487 "uuid": "ba968d7b-b084-46af-ae92-6cf6509689f5", 00:41:38.487 "is_configured": true, 00:41:38.487 "data_offset": 2048, 00:41:38.487 "data_size": 63488 00:41:38.487 } 00:41:38.487 ] 00:41:38.487 }' 00:41:38.487 17:38:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:41:38.487 17:38:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:39.054 17:38:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:39.054 17:38:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:41:39.054 17:38:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.054 17:38:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:39.054 17:38:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.054 17:38:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:41:39.054 17:38:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:39.054 17:38:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.054 17:38:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:39.054 17:38:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:41:39.054 17:38:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.054 17:38:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d646acc3-18cf-4d52-afa8-e6ea780ee013 00:41:39.054 17:38:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.054 17:38:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:39.054 [2024-11-26 17:38:39.640824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:41:39.054 [2024-11-26 17:38:39.641137] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:41:39.054 [2024-11-26 17:38:39.641152] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:41:39.054 [2024-11-26 17:38:39.641448] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:41:39.054 NewBaseBdev 00:41:39.054 17:38:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.054 17:38:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:41:39.054 17:38:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:41:39.054 17:38:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:41:39.054 17:38:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:41:39.054 17:38:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:41:39.054 17:38:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:41:39.054 17:38:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:41:39.054 17:38:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.054 17:38:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:39.054 [2024-11-26 17:38:39.648736] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:41:39.054 [2024-11-26 17:38:39.648814] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:41:39.054 [2024-11-26 17:38:39.649025] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:39.055 17:38:39 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.055 17:38:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:41:39.055 17:38:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.055 17:38:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:39.055 [ 00:41:39.055 { 00:41:39.055 "name": "NewBaseBdev", 00:41:39.055 "aliases": [ 00:41:39.055 "d646acc3-18cf-4d52-afa8-e6ea780ee013" 00:41:39.055 ], 00:41:39.055 "product_name": "Malloc disk", 00:41:39.055 "block_size": 512, 00:41:39.055 "num_blocks": 65536, 00:41:39.055 "uuid": "d646acc3-18cf-4d52-afa8-e6ea780ee013", 00:41:39.055 "assigned_rate_limits": { 00:41:39.055 "rw_ios_per_sec": 0, 00:41:39.055 "rw_mbytes_per_sec": 0, 00:41:39.055 "r_mbytes_per_sec": 0, 00:41:39.055 "w_mbytes_per_sec": 0 00:41:39.055 }, 00:41:39.055 "claimed": true, 00:41:39.055 "claim_type": "exclusive_write", 00:41:39.055 "zoned": false, 00:41:39.055 "supported_io_types": { 00:41:39.055 "read": true, 00:41:39.055 "write": true, 00:41:39.055 "unmap": true, 00:41:39.055 "flush": true, 00:41:39.055 "reset": true, 00:41:39.055 "nvme_admin": false, 00:41:39.055 "nvme_io": false, 00:41:39.055 "nvme_io_md": false, 00:41:39.055 "write_zeroes": true, 00:41:39.055 "zcopy": true, 00:41:39.055 "get_zone_info": false, 00:41:39.055 "zone_management": false, 00:41:39.055 "zone_append": false, 00:41:39.055 "compare": false, 00:41:39.055 "compare_and_write": false, 00:41:39.055 "abort": true, 00:41:39.055 "seek_hole": false, 00:41:39.055 "seek_data": false, 00:41:39.055 "copy": true, 00:41:39.055 "nvme_iov_md": false 00:41:39.055 }, 00:41:39.055 "memory_domains": [ 00:41:39.055 { 00:41:39.055 "dma_device_id": "system", 00:41:39.055 "dma_device_type": 1 00:41:39.055 }, 00:41:39.055 { 00:41:39.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:39.055 "dma_device_type": 2 00:41:39.055 } 
00:41:39.055 ], 00:41:39.055 "driver_specific": {} 00:41:39.055 } 00:41:39.055 ] 00:41:39.055 17:38:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.055 17:38:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:41:39.055 17:38:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:41:39.055 17:38:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:41:39.055 17:38:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:39.055 17:38:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:39.055 17:38:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:39.055 17:38:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:41:39.055 17:38:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:39.055 17:38:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:39.055 17:38:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:39.055 17:38:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:39.055 17:38:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:39.055 17:38:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.055 17:38:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:39.055 17:38:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:41:39.055 
17:38:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.055 17:38:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:39.055 "name": "Existed_Raid", 00:41:39.055 "uuid": "390d4583-b716-4bf7-87b7-240f607901ba", 00:41:39.055 "strip_size_kb": 64, 00:41:39.055 "state": "online", 00:41:39.055 "raid_level": "raid5f", 00:41:39.055 "superblock": true, 00:41:39.055 "num_base_bdevs": 4, 00:41:39.055 "num_base_bdevs_discovered": 4, 00:41:39.055 "num_base_bdevs_operational": 4, 00:41:39.055 "base_bdevs_list": [ 00:41:39.055 { 00:41:39.055 "name": "NewBaseBdev", 00:41:39.055 "uuid": "d646acc3-18cf-4d52-afa8-e6ea780ee013", 00:41:39.055 "is_configured": true, 00:41:39.055 "data_offset": 2048, 00:41:39.055 "data_size": 63488 00:41:39.055 }, 00:41:39.055 { 00:41:39.055 "name": "BaseBdev2", 00:41:39.055 "uuid": "c9b14856-fd96-4659-a893-ecc329b143dd", 00:41:39.055 "is_configured": true, 00:41:39.055 "data_offset": 2048, 00:41:39.055 "data_size": 63488 00:41:39.055 }, 00:41:39.055 { 00:41:39.055 "name": "BaseBdev3", 00:41:39.055 "uuid": "bc20554b-0486-423a-ac42-c1a26ec52b0e", 00:41:39.055 "is_configured": true, 00:41:39.055 "data_offset": 2048, 00:41:39.055 "data_size": 63488 00:41:39.055 }, 00:41:39.055 { 00:41:39.055 "name": "BaseBdev4", 00:41:39.055 "uuid": "ba968d7b-b084-46af-ae92-6cf6509689f5", 00:41:39.055 "is_configured": true, 00:41:39.055 "data_offset": 2048, 00:41:39.055 "data_size": 63488 00:41:39.055 } 00:41:39.055 ] 00:41:39.055 }' 00:41:39.055 17:38:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:39.055 17:38:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:39.623 17:38:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:41:39.623 17:38:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:41:39.623 17:38:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:41:39.623 17:38:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:41:39.623 17:38:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:41:39.623 17:38:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:41:39.623 17:38:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:41:39.623 17:38:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:41:39.623 17:38:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.623 17:38:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:39.623 [2024-11-26 17:38:40.182196] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:41:39.623 17:38:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.623 17:38:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:41:39.623 "name": "Existed_Raid", 00:41:39.623 "aliases": [ 00:41:39.623 "390d4583-b716-4bf7-87b7-240f607901ba" 00:41:39.623 ], 00:41:39.623 "product_name": "Raid Volume", 00:41:39.623 "block_size": 512, 00:41:39.623 "num_blocks": 190464, 00:41:39.623 "uuid": "390d4583-b716-4bf7-87b7-240f607901ba", 00:41:39.623 "assigned_rate_limits": { 00:41:39.623 "rw_ios_per_sec": 0, 00:41:39.623 "rw_mbytes_per_sec": 0, 00:41:39.623 "r_mbytes_per_sec": 0, 00:41:39.623 "w_mbytes_per_sec": 0 00:41:39.623 }, 00:41:39.623 "claimed": false, 00:41:39.623 "zoned": false, 00:41:39.623 "supported_io_types": { 00:41:39.623 "read": true, 00:41:39.623 "write": true, 00:41:39.624 "unmap": false, 00:41:39.624 "flush": false, 
00:41:39.624 "reset": true, 00:41:39.624 "nvme_admin": false, 00:41:39.624 "nvme_io": false, 00:41:39.624 "nvme_io_md": false, 00:41:39.624 "write_zeroes": true, 00:41:39.624 "zcopy": false, 00:41:39.624 "get_zone_info": false, 00:41:39.624 "zone_management": false, 00:41:39.624 "zone_append": false, 00:41:39.624 "compare": false, 00:41:39.624 "compare_and_write": false, 00:41:39.624 "abort": false, 00:41:39.624 "seek_hole": false, 00:41:39.624 "seek_data": false, 00:41:39.624 "copy": false, 00:41:39.624 "nvme_iov_md": false 00:41:39.624 }, 00:41:39.624 "driver_specific": { 00:41:39.624 "raid": { 00:41:39.624 "uuid": "390d4583-b716-4bf7-87b7-240f607901ba", 00:41:39.624 "strip_size_kb": 64, 00:41:39.624 "state": "online", 00:41:39.624 "raid_level": "raid5f", 00:41:39.624 "superblock": true, 00:41:39.624 "num_base_bdevs": 4, 00:41:39.624 "num_base_bdevs_discovered": 4, 00:41:39.624 "num_base_bdevs_operational": 4, 00:41:39.624 "base_bdevs_list": [ 00:41:39.624 { 00:41:39.624 "name": "NewBaseBdev", 00:41:39.624 "uuid": "d646acc3-18cf-4d52-afa8-e6ea780ee013", 00:41:39.624 "is_configured": true, 00:41:39.624 "data_offset": 2048, 00:41:39.624 "data_size": 63488 00:41:39.624 }, 00:41:39.624 { 00:41:39.624 "name": "BaseBdev2", 00:41:39.624 "uuid": "c9b14856-fd96-4659-a893-ecc329b143dd", 00:41:39.624 "is_configured": true, 00:41:39.624 "data_offset": 2048, 00:41:39.624 "data_size": 63488 00:41:39.624 }, 00:41:39.624 { 00:41:39.624 "name": "BaseBdev3", 00:41:39.624 "uuid": "bc20554b-0486-423a-ac42-c1a26ec52b0e", 00:41:39.624 "is_configured": true, 00:41:39.624 "data_offset": 2048, 00:41:39.624 "data_size": 63488 00:41:39.624 }, 00:41:39.624 { 00:41:39.624 "name": "BaseBdev4", 00:41:39.624 "uuid": "ba968d7b-b084-46af-ae92-6cf6509689f5", 00:41:39.624 "is_configured": true, 00:41:39.624 "data_offset": 2048, 00:41:39.624 "data_size": 63488 00:41:39.624 } 00:41:39.624 ] 00:41:39.624 } 00:41:39.624 } 00:41:39.624 }' 00:41:39.624 17:38:40 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:41:39.624 17:38:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:41:39.624 BaseBdev2 00:41:39.624 BaseBdev3 00:41:39.624 BaseBdev4' 00:41:39.624 17:38:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:41:39.624 17:38:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:41:39.624 17:38:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:41:39.624 17:38:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:41:39.624 17:38:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:41:39.624 17:38:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.624 17:38:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:39.884 17:38:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.884 17:38:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:41:39.884 17:38:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:41:39.884 17:38:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:41:39.884 17:38:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:41:39.884 17:38:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:41:39.884 
17:38:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.884 17:38:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:39.884 17:38:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.884 17:38:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:41:39.884 17:38:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:41:39.884 17:38:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:41:39.884 17:38:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:41:39.884 17:38:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.884 17:38:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:39.884 17:38:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:41:39.884 17:38:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.884 17:38:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:41:39.884 17:38:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:41:39.884 17:38:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:41:39.884 17:38:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:41:39.884 17:38:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.884 17:38:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:41:39.884 17:38:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:41:39.884 17:38:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.884 17:38:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:41:39.884 17:38:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:41:39.884 17:38:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:41:39.884 17:38:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.884 17:38:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:39.884 [2024-11-26 17:38:40.521439] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:41:39.884 [2024-11-26 17:38:40.521483] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:41:39.884 [2024-11-26 17:38:40.521649] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:41:39.884 [2024-11-26 17:38:40.521993] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:41:39.884 [2024-11-26 17:38:40.522006] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:41:39.884 17:38:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.884 17:38:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83736 00:41:39.884 17:38:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83736 ']' 00:41:39.884 17:38:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 83736 
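The `killprocess 83736` sequence above probes the pid with `kill -0`, reads its comm name with `ps --no-headers -o comm=`, and refuses to signal a process named `sudo` before killing and reaping it. A simplified version of that guard, with `sleep` standing in for the SPDK target process:

```shell
#!/usr/bin/env bash
# Background "target" process to act on (stand-in for the SPDK app).
sleep 30 &
app_pid=$!

killprocess() {
    local pid=$1 name
    # kill -0 checks for existence without delivering a signal
    kill -0 "$pid" 2>/dev/null || return 1
    name=$(ps --no-headers -o comm= "$pid")
    # Safety check from the trace: never signal a sudo wrapper directly.
    [[ $name != sudo ]] || return 1
    echo "killing process with pid $pid"
    kill "$pid"
    # Reap it; wait reports the (expected) non-zero signal exit status.
    wait "$pid" 2>/dev/null || true
    return 0
}

killprocess "$app_pid"
```

After `killprocess` returns, a second `kill -0` on the pid fails, which is what the subsequent `wait 83736` in the trace relies on.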
00:41:39.884 17:38:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:41:39.884 17:38:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:39.884 17:38:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83736 00:41:39.885 17:38:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:39.885 17:38:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:39.885 17:38:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83736' 00:41:39.885 killing process with pid 83736 00:41:39.885 17:38:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83736 00:41:39.885 [2024-11-26 17:38:40.563215] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:41:39.885 17:38:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83736 00:41:40.454 [2024-11-26 17:38:41.005746] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:41:41.831 17:38:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:41:41.831 00:41:41.831 real 0m12.307s 00:41:41.831 user 0m19.161s 00:41:41.831 sys 0m2.453s 00:41:41.831 17:38:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:41.831 17:38:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:41:41.831 ************************************ 00:41:41.831 END TEST raid5f_state_function_test_sb 00:41:41.831 ************************************ 00:41:41.831 17:38:42 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:41:41.831 17:38:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 
']' 00:41:41.831 17:38:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:41.831 17:38:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:41:41.831 ************************************ 00:41:41.831 START TEST raid5f_superblock_test 00:41:41.831 ************************************ 00:41:41.831 17:38:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:41:41.831 17:38:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:41:41.831 17:38:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:41:41.831 17:38:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:41:41.831 17:38:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:41:41.831 17:38:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:41:41.832 17:38:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:41:41.832 17:38:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:41:41.832 17:38:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:41:41.832 17:38:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:41:41.832 17:38:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:41:41.832 17:38:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:41:41.832 17:38:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:41:41.832 17:38:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:41:41.832 17:38:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:41:41.832 17:38:42 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@405 -- # strip_size=64 00:41:41.832 17:38:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:41:41.832 17:38:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84412 00:41:41.832 17:38:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:41:41.832 17:38:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84412 00:41:41.832 17:38:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84412 ']' 00:41:41.832 17:38:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:41.832 17:38:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:41.832 17:38:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:41.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:41.832 17:38:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:41.832 17:38:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:41.832 [2024-11-26 17:38:42.446156] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
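The `waitforlisten 84412` step above is a bounded polling loop: retry until the freshly launched target is reachable, then proceed. The real helper waits for the UNIX-domain RPC socket (`/var/tmp/spdk.sock`) and probes it with `rpc.py`; the sketch below keeps only the retry shape and waits on a plain file so it can run anywhere (the marker path and delay are illustrative):

```shell
#!/usr/bin/env bash
# waitforlisten PATH [MAX_RETRIES]: poll until PATH exists, 0.1 s apart.
waitforlisten() {
    local path=$1 max_retries=${2:-50} i
    for ((i = 0; i < max_retries; i++)); do
        [[ -e $path ]] && return 0
        sleep 0.1
    done
    echo "timed out waiting for $path" >&2
    return 1
}

marker=$(mktemp -u)               # path that does not exist yet
( sleep 0.3; : > "$marker" ) &    # background "target" creates it later
waitforlisten "$marker" && echo "target is listening"
rm -f "$marker"
```

Bounding the loop (`max_retries=100` in the trace's helper) is what turns a hung target into a test failure instead of a stuck CI job.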
00:41:41.832 [2024-11-26 17:38:42.446388] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84412 ] 00:41:42.090 [2024-11-26 17:38:42.623806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:42.090 [2024-11-26 17:38:42.767855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:42.350 [2024-11-26 17:38:43.018793] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:41:42.350 [2024-11-26 17:38:43.018913] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:41:42.608 17:38:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:42.608 17:38:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:41:42.608 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:41:42.608 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:41:42.608 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:41:42.608 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:41:42.608 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:41:42.608 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:41:42.608 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:41:42.608 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:41:42.608 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:41:42.608 17:38:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:42.608 17:38:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:42.866 malloc1 00:41:42.866 17:38:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:42.866 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:41:42.866 17:38:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:42.866 17:38:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:42.866 [2024-11-26 17:38:43.351585] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:41:42.866 [2024-11-26 17:38:43.351659] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:42.866 [2024-11-26 17:38:43.351689] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:41:42.866 [2024-11-26 17:38:43.351699] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:42.866 [2024-11-26 17:38:43.354233] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:42.866 [2024-11-26 17:38:43.354359] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:41:42.866 pt1 00:41:42.866 17:38:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:42.866 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:41:42.866 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:41:42.866 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:41:42.866 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:41:42.866 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:41:42.866 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:41:42.866 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:41:42.866 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:41:42.866 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:41:42.866 17:38:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:42.866 17:38:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:42.866 malloc2 00:41:42.866 17:38:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:42.866 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:41:42.866 17:38:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:42.866 17:38:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:42.866 [2024-11-26 17:38:43.414317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:41:42.866 [2024-11-26 17:38:43.414477] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:42.866 [2024-11-26 17:38:43.414536] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:41:42.866 [2024-11-26 17:38:43.414572] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:42.866 [2024-11-26 17:38:43.417291] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:42.866 [2024-11-26 17:38:43.417381] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:41:42.866 pt2 00:41:42.866 17:38:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:42.866 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:41:42.866 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:41:42.866 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:41:42.866 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:41:42.866 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:41:42.866 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:41:42.866 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:41:42.867 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:41:42.867 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:41:42.867 17:38:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:42.867 17:38:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:42.867 malloc3 00:41:42.867 17:38:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:42.867 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:41:42.867 17:38:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:42.867 17:38:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:42.867 [2024-11-26 17:38:43.487767] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:41:42.867 [2024-11-26 17:38:43.487947] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:42.867 [2024-11-26 17:38:43.487994] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:41:42.867 [2024-11-26 17:38:43.488039] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:42.867 [2024-11-26 17:38:43.490739] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:42.867 [2024-11-26 17:38:43.490826] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:41:42.867 pt3 00:41:42.867 17:38:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:42.867 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:41:42.867 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:41:42.867 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:41:42.867 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:41:42.867 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:41:42.867 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:41:42.867 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:41:42.867 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:41:42.867 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:41:42.867 17:38:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:42.867 17:38:43 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:42.867 malloc4 00:41:42.867 17:38:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:42.867 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:41:42.867 17:38:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:42.867 17:38:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:42.867 [2024-11-26 17:38:43.551393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:41:42.867 [2024-11-26 17:38:43.551472] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:42.867 [2024-11-26 17:38:43.551497] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:41:42.867 [2024-11-26 17:38:43.551507] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:42.867 [2024-11-26 17:38:43.554118] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:42.867 [2024-11-26 17:38:43.554222] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:41:42.867 pt4 00:41:42.867 17:38:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:42.867 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:41:42.867 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:41:42.867 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:41:42.867 17:38:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:42.867 17:38:43 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:41:43.125 [2024-11-26 17:38:43.563412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:41:43.125 [2024-11-26 17:38:43.565711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:41:43.125 [2024-11-26 17:38:43.565804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:41:43.125 [2024-11-26 17:38:43.565852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:41:43.125 [2024-11-26 17:38:43.566053] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:41:43.125 [2024-11-26 17:38:43.566070] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:41:43.125 [2024-11-26 17:38:43.566372] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:41:43.125 [2024-11-26 17:38:43.574104] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:41:43.126 [2024-11-26 17:38:43.574170] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:41:43.126 [2024-11-26 17:38:43.574440] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:43.126 17:38:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:43.126 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:41:43.126 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:43.126 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:43.126 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:43.126 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:43.126 
17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:41:43.126 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:43.126 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:43.126 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:43.126 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:43.126 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:43.126 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:43.126 17:38:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:43.126 17:38:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:43.126 17:38:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:43.126 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:43.126 "name": "raid_bdev1", 00:41:43.126 "uuid": "dda660b7-2fcc-4c14-b2ee-2835e2fe12a7", 00:41:43.126 "strip_size_kb": 64, 00:41:43.126 "state": "online", 00:41:43.126 "raid_level": "raid5f", 00:41:43.126 "superblock": true, 00:41:43.126 "num_base_bdevs": 4, 00:41:43.126 "num_base_bdevs_discovered": 4, 00:41:43.126 "num_base_bdevs_operational": 4, 00:41:43.126 "base_bdevs_list": [ 00:41:43.126 { 00:41:43.126 "name": "pt1", 00:41:43.126 "uuid": "00000000-0000-0000-0000-000000000001", 00:41:43.126 "is_configured": true, 00:41:43.126 "data_offset": 2048, 00:41:43.126 "data_size": 63488 00:41:43.126 }, 00:41:43.126 { 00:41:43.126 "name": "pt2", 00:41:43.126 "uuid": "00000000-0000-0000-0000-000000000002", 00:41:43.126 "is_configured": true, 00:41:43.126 "data_offset": 2048, 00:41:43.126 
"data_size": 63488 00:41:43.126 }, 00:41:43.126 { 00:41:43.126 "name": "pt3", 00:41:43.126 "uuid": "00000000-0000-0000-0000-000000000003", 00:41:43.126 "is_configured": true, 00:41:43.126 "data_offset": 2048, 00:41:43.126 "data_size": 63488 00:41:43.126 }, 00:41:43.126 { 00:41:43.126 "name": "pt4", 00:41:43.126 "uuid": "00000000-0000-0000-0000-000000000004", 00:41:43.126 "is_configured": true, 00:41:43.126 "data_offset": 2048, 00:41:43.126 "data_size": 63488 00:41:43.126 } 00:41:43.126 ] 00:41:43.126 }' 00:41:43.126 17:38:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:43.126 17:38:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:43.385 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:41:43.385 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:41:43.385 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:41:43.385 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:41:43.385 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:41:43.385 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:41:43.385 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:41:43.385 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:41:43.385 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:43.385 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:43.385 [2024-11-26 17:38:44.056050] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:41:43.385 17:38:44 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:43.643 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:41:43.643 "name": "raid_bdev1", 00:41:43.643 "aliases": [ 00:41:43.643 "dda660b7-2fcc-4c14-b2ee-2835e2fe12a7" 00:41:43.643 ], 00:41:43.643 "product_name": "Raid Volume", 00:41:43.643 "block_size": 512, 00:41:43.643 "num_blocks": 190464, 00:41:43.643 "uuid": "dda660b7-2fcc-4c14-b2ee-2835e2fe12a7", 00:41:43.643 "assigned_rate_limits": { 00:41:43.643 "rw_ios_per_sec": 0, 00:41:43.643 "rw_mbytes_per_sec": 0, 00:41:43.643 "r_mbytes_per_sec": 0, 00:41:43.643 "w_mbytes_per_sec": 0 00:41:43.643 }, 00:41:43.643 "claimed": false, 00:41:43.643 "zoned": false, 00:41:43.643 "supported_io_types": { 00:41:43.643 "read": true, 00:41:43.643 "write": true, 00:41:43.643 "unmap": false, 00:41:43.643 "flush": false, 00:41:43.643 "reset": true, 00:41:43.643 "nvme_admin": false, 00:41:43.643 "nvme_io": false, 00:41:43.643 "nvme_io_md": false, 00:41:43.643 "write_zeroes": true, 00:41:43.643 "zcopy": false, 00:41:43.643 "get_zone_info": false, 00:41:43.643 "zone_management": false, 00:41:43.643 "zone_append": false, 00:41:43.643 "compare": false, 00:41:43.643 "compare_and_write": false, 00:41:43.643 "abort": false, 00:41:43.643 "seek_hole": false, 00:41:43.643 "seek_data": false, 00:41:43.643 "copy": false, 00:41:43.643 "nvme_iov_md": false 00:41:43.643 }, 00:41:43.643 "driver_specific": { 00:41:43.644 "raid": { 00:41:43.644 "uuid": "dda660b7-2fcc-4c14-b2ee-2835e2fe12a7", 00:41:43.644 "strip_size_kb": 64, 00:41:43.644 "state": "online", 00:41:43.644 "raid_level": "raid5f", 00:41:43.644 "superblock": true, 00:41:43.644 "num_base_bdevs": 4, 00:41:43.644 "num_base_bdevs_discovered": 4, 00:41:43.644 "num_base_bdevs_operational": 4, 00:41:43.644 "base_bdevs_list": [ 00:41:43.644 { 00:41:43.644 "name": "pt1", 00:41:43.644 "uuid": "00000000-0000-0000-0000-000000000001", 00:41:43.644 "is_configured": true, 00:41:43.644 "data_offset": 2048, 
00:41:43.644 "data_size": 63488 00:41:43.644 }, 00:41:43.644 { 00:41:43.644 "name": "pt2", 00:41:43.644 "uuid": "00000000-0000-0000-0000-000000000002", 00:41:43.644 "is_configured": true, 00:41:43.644 "data_offset": 2048, 00:41:43.644 "data_size": 63488 00:41:43.644 }, 00:41:43.644 { 00:41:43.644 "name": "pt3", 00:41:43.644 "uuid": "00000000-0000-0000-0000-000000000003", 00:41:43.644 "is_configured": true, 00:41:43.644 "data_offset": 2048, 00:41:43.644 "data_size": 63488 00:41:43.644 }, 00:41:43.644 { 00:41:43.644 "name": "pt4", 00:41:43.644 "uuid": "00000000-0000-0000-0000-000000000004", 00:41:43.644 "is_configured": true, 00:41:43.644 "data_offset": 2048, 00:41:43.644 "data_size": 63488 00:41:43.644 } 00:41:43.644 ] 00:41:43.644 } 00:41:43.644 } 00:41:43.644 }' 00:41:43.644 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:41:43.644 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:41:43.644 pt2 00:41:43.644 pt3 00:41:43.644 pt4' 00:41:43.644 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:41:43.644 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:41:43.644 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:41:43.644 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:41:43.644 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:43.644 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:43.644 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:41:43.644 17:38:44 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:43.644 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:41:43.644 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:41:43.644 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:41:43.644 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:41:43.644 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:43.644 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:43.644 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:41:43.644 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:43.644 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:41:43.644 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:41:43.644 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:41:43.644 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:41:43.644 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:41:43.644 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:43.644 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:43.644 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:43.903 17:38:44 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:41:43.903 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:41:43.903 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:41:43.903 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:41:43.903 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:43.903 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:43.903 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:41:43.903 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:43.903 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:41:43.903 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:41:43.903 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:41:43.903 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:41:43.903 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:43.903 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:43.903 [2024-11-26 17:38:44.415399] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:41:43.903 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:43.903 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=dda660b7-2fcc-4c14-b2ee-2835e2fe12a7 00:41:43.903 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
dda660b7-2fcc-4c14-b2ee-2835e2fe12a7 ']' 00:41:43.903 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:41:43.903 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:43.903 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:43.903 [2024-11-26 17:38:44.459107] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:41:43.903 [2024-11-26 17:38:44.459141] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:41:43.903 [2024-11-26 17:38:44.459256] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:41:43.903 [2024-11-26 17:38:44.459358] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:41:43.903 [2024-11-26 17:38:44.459375] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:41:43.903 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:43.903 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:43.903 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:41:43.903 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:43.903 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:43.903 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:43.903 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:41:43.903 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:41:43.903 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:41:43.903 
17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:41:43.903 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:43.903 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:43.903 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:43.903 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:41:43.903 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:41:43.903 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:43.903 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:43.903 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:43.903 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:41:43.903 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:41:43.903 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:43.903 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:43.903 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:43.903 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:41:43.903 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:41:43.903 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:43.903 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:43.903 17:38:44 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:43.903 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:41:43.903 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:41:43.903 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:43.903 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:44.162 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.162 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:41:44.162 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:41:44.162 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:41:44.162 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:41:44.162 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:41:44.162 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:44.162 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:41:44.162 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:44.162 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:41:44.162 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:41:44.162 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:44.162 [2024-11-26 17:38:44.626833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:41:44.162 [2024-11-26 17:38:44.629061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:41:44.162 [2024-11-26 17:38:44.629115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:41:44.162 [2024-11-26 17:38:44.629150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:41:44.162 [2024-11-26 17:38:44.629206] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:41:44.162 [2024-11-26 17:38:44.629264] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:41:44.162 [2024-11-26 17:38:44.629283] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:41:44.162 [2024-11-26 17:38:44.629304] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:41:44.162 [2024-11-26 17:38:44.629318] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:41:44.162 [2024-11-26 17:38:44.629330] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:41:44.162 request: 00:41:44.162 { 00:41:44.162 "name": "raid_bdev1", 00:41:44.162 "raid_level": "raid5f", 00:41:44.162 "base_bdevs": [ 00:41:44.162 "malloc1", 00:41:44.162 "malloc2", 00:41:44.162 "malloc3", 00:41:44.162 "malloc4" 00:41:44.162 ], 00:41:44.162 "strip_size_kb": 64, 00:41:44.162 "superblock": false, 00:41:44.162 "method": "bdev_raid_create", 00:41:44.162 "req_id": 1 00:41:44.162 } 00:41:44.162 Got JSON-RPC error response 
00:41:44.162 response: 00:41:44.162 { 00:41:44.162 "code": -17, 00:41:44.162 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:41:44.162 } 00:41:44.162 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:41:44.162 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:41:44.162 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:44.162 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:41:44.162 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:44.163 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:44.163 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.163 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:44.163 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:41:44.163 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.163 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:41:44.163 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:41:44.163 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:41:44.163 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.163 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:44.163 [2024-11-26 17:38:44.694721] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:41:44.163 [2024-11-26 17:38:44.694878] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:41:44.163 [2024-11-26 17:38:44.694916] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:41:44.163 [2024-11-26 17:38:44.694948] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:44.163 [2024-11-26 17:38:44.697769] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:44.163 [2024-11-26 17:38:44.697861] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:41:44.163 [2024-11-26 17:38:44.698000] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:41:44.163 [2024-11-26 17:38:44.698101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:41:44.163 pt1 00:41:44.163 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.163 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:41:44.163 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:44.163 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:41:44.163 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:44.163 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:44.163 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:41:44.163 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:44.163 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:44.163 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:44.163 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:41:44.163 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:44.163 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:44.163 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.163 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:44.163 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.163 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:44.163 "name": "raid_bdev1", 00:41:44.163 "uuid": "dda660b7-2fcc-4c14-b2ee-2835e2fe12a7", 00:41:44.163 "strip_size_kb": 64, 00:41:44.163 "state": "configuring", 00:41:44.163 "raid_level": "raid5f", 00:41:44.163 "superblock": true, 00:41:44.163 "num_base_bdevs": 4, 00:41:44.163 "num_base_bdevs_discovered": 1, 00:41:44.163 "num_base_bdevs_operational": 4, 00:41:44.163 "base_bdevs_list": [ 00:41:44.163 { 00:41:44.163 "name": "pt1", 00:41:44.163 "uuid": "00000000-0000-0000-0000-000000000001", 00:41:44.163 "is_configured": true, 00:41:44.163 "data_offset": 2048, 00:41:44.163 "data_size": 63488 00:41:44.163 }, 00:41:44.163 { 00:41:44.163 "name": null, 00:41:44.163 "uuid": "00000000-0000-0000-0000-000000000002", 00:41:44.163 "is_configured": false, 00:41:44.163 "data_offset": 2048, 00:41:44.163 "data_size": 63488 00:41:44.163 }, 00:41:44.163 { 00:41:44.163 "name": null, 00:41:44.163 "uuid": "00000000-0000-0000-0000-000000000003", 00:41:44.163 "is_configured": false, 00:41:44.163 "data_offset": 2048, 00:41:44.163 "data_size": 63488 00:41:44.163 }, 00:41:44.163 { 00:41:44.163 "name": null, 00:41:44.163 "uuid": "00000000-0000-0000-0000-000000000004", 00:41:44.163 "is_configured": false, 00:41:44.163 "data_offset": 2048, 00:41:44.163 "data_size": 63488 00:41:44.163 } 00:41:44.163 ] 00:41:44.163 }' 
00:41:44.163 17:38:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:44.163 17:38:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:44.730 17:38:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:41:44.730 17:38:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:41:44.730 17:38:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.730 17:38:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:44.730 [2024-11-26 17:38:45.165946] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:41:44.730 [2024-11-26 17:38:45.166060] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:44.730 [2024-11-26 17:38:45.166086] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:41:44.730 [2024-11-26 17:38:45.166099] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:44.730 [2024-11-26 17:38:45.166686] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:44.730 [2024-11-26 17:38:45.166713] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:41:44.730 [2024-11-26 17:38:45.166833] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:41:44.730 [2024-11-26 17:38:45.166864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:41:44.730 pt2 00:41:44.730 17:38:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.730 17:38:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:41:44.731 17:38:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:41:44.731 17:38:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:44.731 [2024-11-26 17:38:45.177979] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:41:44.731 17:38:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.731 17:38:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:41:44.731 17:38:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:44.731 17:38:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:41:44.731 17:38:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:44.731 17:38:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:44.731 17:38:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:41:44.731 17:38:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:44.731 17:38:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:44.731 17:38:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:44.731 17:38:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:44.731 17:38:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:44.731 17:38:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:44.731 17:38:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.731 17:38:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:44.731 17:38:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:41:44.731 17:38:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:44.731 "name": "raid_bdev1", 00:41:44.731 "uuid": "dda660b7-2fcc-4c14-b2ee-2835e2fe12a7", 00:41:44.731 "strip_size_kb": 64, 00:41:44.731 "state": "configuring", 00:41:44.731 "raid_level": "raid5f", 00:41:44.731 "superblock": true, 00:41:44.731 "num_base_bdevs": 4, 00:41:44.731 "num_base_bdevs_discovered": 1, 00:41:44.731 "num_base_bdevs_operational": 4, 00:41:44.731 "base_bdevs_list": [ 00:41:44.731 { 00:41:44.731 "name": "pt1", 00:41:44.731 "uuid": "00000000-0000-0000-0000-000000000001", 00:41:44.731 "is_configured": true, 00:41:44.731 "data_offset": 2048, 00:41:44.731 "data_size": 63488 00:41:44.731 }, 00:41:44.731 { 00:41:44.731 "name": null, 00:41:44.731 "uuid": "00000000-0000-0000-0000-000000000002", 00:41:44.731 "is_configured": false, 00:41:44.731 "data_offset": 0, 00:41:44.731 "data_size": 63488 00:41:44.731 }, 00:41:44.731 { 00:41:44.731 "name": null, 00:41:44.731 "uuid": "00000000-0000-0000-0000-000000000003", 00:41:44.731 "is_configured": false, 00:41:44.731 "data_offset": 2048, 00:41:44.731 "data_size": 63488 00:41:44.731 }, 00:41:44.731 { 00:41:44.731 "name": null, 00:41:44.731 "uuid": "00000000-0000-0000-0000-000000000004", 00:41:44.731 "is_configured": false, 00:41:44.731 "data_offset": 2048, 00:41:44.731 "data_size": 63488 00:41:44.731 } 00:41:44.731 ] 00:41:44.731 }' 00:41:44.731 17:38:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:44.731 17:38:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:44.990 17:38:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:41:44.990 17:38:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:41:44.990 17:38:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:41:44.990 17:38:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.990 17:38:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:44.990 [2024-11-26 17:38:45.581236] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:41:44.990 [2024-11-26 17:38:45.581424] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:44.990 [2024-11-26 17:38:45.581477] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:41:44.990 [2024-11-26 17:38:45.581517] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:44.990 [2024-11-26 17:38:45.582138] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:44.990 [2024-11-26 17:38:45.582210] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:41:44.990 [2024-11-26 17:38:45.582356] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:41:44.990 [2024-11-26 17:38:45.582414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:41:44.990 pt2 00:41:44.990 17:38:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.990 17:38:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:41:44.990 17:38:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:41:44.990 17:38:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:41:44.990 17:38:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.990 17:38:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:44.990 [2024-11-26 17:38:45.593184] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:41:44.990 [2024-11-26 17:38:45.593257] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:44.990 [2024-11-26 17:38:45.593288] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:41:44.990 [2024-11-26 17:38:45.593301] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:44.990 [2024-11-26 17:38:45.593843] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:44.990 [2024-11-26 17:38:45.593867] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:41:44.990 [2024-11-26 17:38:45.593965] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:41:44.990 [2024-11-26 17:38:45.593997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:41:44.990 pt3 00:41:44.991 17:38:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.991 17:38:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:41:44.991 17:38:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:41:44.991 17:38:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:41:44.991 17:38:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.991 17:38:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:44.991 [2024-11-26 17:38:45.605134] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:41:44.991 [2024-11-26 17:38:45.605271] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:44.991 [2024-11-26 17:38:45.605329] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:41:44.991 [2024-11-26 17:38:45.605358] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:44.991 [2024-11-26 17:38:45.605963] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:44.991 [2024-11-26 17:38:45.606047] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:41:44.991 [2024-11-26 17:38:45.606180] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:41:44.991 [2024-11-26 17:38:45.606239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:41:44.991 [2024-11-26 17:38:45.606440] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:41:44.991 [2024-11-26 17:38:45.606480] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:41:44.991 [2024-11-26 17:38:45.606782] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:41:44.991 [2024-11-26 17:38:45.613913] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:41:44.991 [2024-11-26 17:38:45.613983] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:41:44.991 [2024-11-26 17:38:45.614253] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:44.991 pt4 00:41:44.991 17:38:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.991 17:38:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:41:44.991 17:38:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:41:44.991 17:38:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:41:44.991 17:38:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:44.991 17:38:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:41:44.991 17:38:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:44.991 17:38:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:44.991 17:38:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:41:44.991 17:38:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:44.991 17:38:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:44.991 17:38:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:44.991 17:38:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:44.991 17:38:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:44.991 17:38:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:44.991 17:38:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:44.991 17:38:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:44.991 17:38:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:44.991 17:38:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:44.991 "name": "raid_bdev1", 00:41:44.991 "uuid": "dda660b7-2fcc-4c14-b2ee-2835e2fe12a7", 00:41:44.991 "strip_size_kb": 64, 00:41:44.991 "state": "online", 00:41:44.991 "raid_level": "raid5f", 00:41:44.991 "superblock": true, 00:41:44.991 "num_base_bdevs": 4, 00:41:44.991 "num_base_bdevs_discovered": 4, 00:41:44.991 "num_base_bdevs_operational": 4, 00:41:44.991 "base_bdevs_list": [ 00:41:44.991 { 00:41:44.991 "name": "pt1", 00:41:44.991 "uuid": "00000000-0000-0000-0000-000000000001", 00:41:44.991 "is_configured": true, 00:41:44.991 
"data_offset": 2048, 00:41:44.991 "data_size": 63488 00:41:44.991 }, 00:41:44.991 { 00:41:44.991 "name": "pt2", 00:41:44.991 "uuid": "00000000-0000-0000-0000-000000000002", 00:41:44.991 "is_configured": true, 00:41:44.991 "data_offset": 2048, 00:41:44.991 "data_size": 63488 00:41:44.991 }, 00:41:44.991 { 00:41:44.991 "name": "pt3", 00:41:44.991 "uuid": "00000000-0000-0000-0000-000000000003", 00:41:44.991 "is_configured": true, 00:41:44.991 "data_offset": 2048, 00:41:44.991 "data_size": 63488 00:41:44.991 }, 00:41:44.991 { 00:41:44.991 "name": "pt4", 00:41:44.991 "uuid": "00000000-0000-0000-0000-000000000004", 00:41:44.991 "is_configured": true, 00:41:44.991 "data_offset": 2048, 00:41:44.991 "data_size": 63488 00:41:44.991 } 00:41:44.991 ] 00:41:44.991 }' 00:41:44.991 17:38:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:44.991 17:38:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:45.558 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:41:45.558 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:41:45.558 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:41:45.558 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:41:45.558 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:41:45.558 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:41:45.558 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:41:45.558 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:41:45.558 17:38:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:45.558 17:38:46 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:45.558 [2024-11-26 17:38:46.103496] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:41:45.558 17:38:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:45.558 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:41:45.558 "name": "raid_bdev1", 00:41:45.558 "aliases": [ 00:41:45.558 "dda660b7-2fcc-4c14-b2ee-2835e2fe12a7" 00:41:45.558 ], 00:41:45.558 "product_name": "Raid Volume", 00:41:45.558 "block_size": 512, 00:41:45.558 "num_blocks": 190464, 00:41:45.558 "uuid": "dda660b7-2fcc-4c14-b2ee-2835e2fe12a7", 00:41:45.558 "assigned_rate_limits": { 00:41:45.558 "rw_ios_per_sec": 0, 00:41:45.558 "rw_mbytes_per_sec": 0, 00:41:45.558 "r_mbytes_per_sec": 0, 00:41:45.558 "w_mbytes_per_sec": 0 00:41:45.558 }, 00:41:45.558 "claimed": false, 00:41:45.558 "zoned": false, 00:41:45.558 "supported_io_types": { 00:41:45.558 "read": true, 00:41:45.558 "write": true, 00:41:45.558 "unmap": false, 00:41:45.558 "flush": false, 00:41:45.558 "reset": true, 00:41:45.558 "nvme_admin": false, 00:41:45.558 "nvme_io": false, 00:41:45.558 "nvme_io_md": false, 00:41:45.558 "write_zeroes": true, 00:41:45.558 "zcopy": false, 00:41:45.558 "get_zone_info": false, 00:41:45.558 "zone_management": false, 00:41:45.558 "zone_append": false, 00:41:45.558 "compare": false, 00:41:45.558 "compare_and_write": false, 00:41:45.558 "abort": false, 00:41:45.558 "seek_hole": false, 00:41:45.558 "seek_data": false, 00:41:45.558 "copy": false, 00:41:45.558 "nvme_iov_md": false 00:41:45.558 }, 00:41:45.558 "driver_specific": { 00:41:45.558 "raid": { 00:41:45.558 "uuid": "dda660b7-2fcc-4c14-b2ee-2835e2fe12a7", 00:41:45.558 "strip_size_kb": 64, 00:41:45.558 "state": "online", 00:41:45.558 "raid_level": "raid5f", 00:41:45.558 "superblock": true, 00:41:45.558 "num_base_bdevs": 4, 00:41:45.558 "num_base_bdevs_discovered": 4, 
00:41:45.558 "num_base_bdevs_operational": 4, 00:41:45.558 "base_bdevs_list": [ 00:41:45.558 { 00:41:45.558 "name": "pt1", 00:41:45.558 "uuid": "00000000-0000-0000-0000-000000000001", 00:41:45.558 "is_configured": true, 00:41:45.558 "data_offset": 2048, 00:41:45.558 "data_size": 63488 00:41:45.558 }, 00:41:45.558 { 00:41:45.558 "name": "pt2", 00:41:45.558 "uuid": "00000000-0000-0000-0000-000000000002", 00:41:45.558 "is_configured": true, 00:41:45.558 "data_offset": 2048, 00:41:45.558 "data_size": 63488 00:41:45.558 }, 00:41:45.558 { 00:41:45.558 "name": "pt3", 00:41:45.558 "uuid": "00000000-0000-0000-0000-000000000003", 00:41:45.558 "is_configured": true, 00:41:45.558 "data_offset": 2048, 00:41:45.558 "data_size": 63488 00:41:45.558 }, 00:41:45.558 { 00:41:45.558 "name": "pt4", 00:41:45.558 "uuid": "00000000-0000-0000-0000-000000000004", 00:41:45.558 "is_configured": true, 00:41:45.558 "data_offset": 2048, 00:41:45.558 "data_size": 63488 00:41:45.558 } 00:41:45.558 ] 00:41:45.558 } 00:41:45.558 } 00:41:45.558 }' 00:41:45.558 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:41:45.558 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:41:45.558 pt2 00:41:45.558 pt3 00:41:45.558 pt4' 00:41:45.558 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:41:45.558 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:41:45.558 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:41:45.558 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:41:45.558 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt1 00:41:45.558 17:38:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:45.558 17:38:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r 
'.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:45.819 [2024-11-26 17:38:46.414978] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:45.819 17:38:46 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' dda660b7-2fcc-4c14-b2ee-2835e2fe12a7 '!=' dda660b7-2fcc-4c14-b2ee-2835e2fe12a7 ']' 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:45.819 [2024-11-26 17:38:46.458821] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:45.819 17:38:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:46.080 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:46.080 "name": "raid_bdev1", 00:41:46.080 "uuid": "dda660b7-2fcc-4c14-b2ee-2835e2fe12a7", 00:41:46.080 "strip_size_kb": 64, 00:41:46.080 "state": "online", 00:41:46.080 "raid_level": "raid5f", 00:41:46.080 "superblock": true, 00:41:46.080 "num_base_bdevs": 4, 00:41:46.080 "num_base_bdevs_discovered": 3, 00:41:46.080 "num_base_bdevs_operational": 3, 00:41:46.080 "base_bdevs_list": [ 00:41:46.080 { 00:41:46.080 "name": null, 00:41:46.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:46.080 "is_configured": false, 00:41:46.080 "data_offset": 0, 00:41:46.081 "data_size": 63488 00:41:46.081 }, 00:41:46.081 { 00:41:46.081 "name": "pt2", 00:41:46.081 "uuid": "00000000-0000-0000-0000-000000000002", 00:41:46.081 "is_configured": true, 00:41:46.081 "data_offset": 2048, 00:41:46.081 "data_size": 63488 00:41:46.081 }, 00:41:46.081 { 00:41:46.081 "name": "pt3", 00:41:46.081 "uuid": "00000000-0000-0000-0000-000000000003", 00:41:46.081 "is_configured": true, 00:41:46.081 "data_offset": 2048, 00:41:46.081 "data_size": 63488 00:41:46.081 }, 00:41:46.081 { 00:41:46.081 "name": "pt4", 00:41:46.081 "uuid": "00000000-0000-0000-0000-000000000004", 00:41:46.081 "is_configured": true, 00:41:46.081 
"data_offset": 2048, 00:41:46.081 "data_size": 63488 00:41:46.081 } 00:41:46.081 ] 00:41:46.081 }' 00:41:46.081 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:46.081 17:38:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:46.340 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:41:46.340 17:38:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:46.340 17:38:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:46.340 [2024-11-26 17:38:46.941844] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:41:46.340 [2024-11-26 17:38:46.941894] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:41:46.340 [2024-11-26 17:38:46.942007] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:41:46.340 [2024-11-26 17:38:46.942117] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:41:46.340 [2024-11-26 17:38:46.942128] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:41:46.340 17:38:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:46.340 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:41:46.340 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:46.340 17:38:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:46.340 17:38:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:46.340 17:38:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:46.340 17:38:46 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:41:46.340 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:41:46.340 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:41:46.340 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:41:46.340 17:38:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:41:46.340 17:38:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:46.340 17:38:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:46.340 17:38:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:46.340 17:38:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:41:46.340 17:38:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:41:46.340 17:38:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:41:46.340 17:38:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:46.340 17:38:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:46.340 17:38:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:46.340 17:38:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:41:46.340 17:38:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:41:46.340 17:38:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:41:46.340 17:38:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:46.340 17:38:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:46.340 17:38:47 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:46.340 17:38:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:41:46.340 17:38:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:41:46.340 17:38:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:41:46.340 17:38:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:41:46.340 17:38:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:41:46.340 17:38:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:46.340 17:38:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:46.598 [2024-11-26 17:38:47.033665] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:41:46.598 [2024-11-26 17:38:47.033845] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:46.598 [2024-11-26 17:38:47.033873] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:41:46.599 [2024-11-26 17:38:47.033884] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:46.599 [2024-11-26 17:38:47.036728] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:46.599 [2024-11-26 17:38:47.036771] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:41:46.599 [2024-11-26 17:38:47.036879] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:41:46.599 [2024-11-26 17:38:47.036932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:41:46.599 pt2 00:41:46.599 17:38:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:46.599 17:38:47 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:41:46.599 17:38:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:46.599 17:38:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:41:46.599 17:38:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:46.599 17:38:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:46.599 17:38:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:41:46.599 17:38:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:46.599 17:38:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:46.599 17:38:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:46.599 17:38:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:46.599 17:38:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:46.599 17:38:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:46.599 17:38:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:46.599 17:38:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:46.599 17:38:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:46.599 17:38:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:46.599 "name": "raid_bdev1", 00:41:46.599 "uuid": "dda660b7-2fcc-4c14-b2ee-2835e2fe12a7", 00:41:46.599 "strip_size_kb": 64, 00:41:46.599 "state": "configuring", 00:41:46.599 "raid_level": "raid5f", 00:41:46.599 "superblock": true, 00:41:46.599 
"num_base_bdevs": 4, 00:41:46.599 "num_base_bdevs_discovered": 1, 00:41:46.599 "num_base_bdevs_operational": 3, 00:41:46.599 "base_bdevs_list": [ 00:41:46.599 { 00:41:46.599 "name": null, 00:41:46.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:46.599 "is_configured": false, 00:41:46.599 "data_offset": 2048, 00:41:46.599 "data_size": 63488 00:41:46.599 }, 00:41:46.599 { 00:41:46.599 "name": "pt2", 00:41:46.599 "uuid": "00000000-0000-0000-0000-000000000002", 00:41:46.599 "is_configured": true, 00:41:46.599 "data_offset": 2048, 00:41:46.599 "data_size": 63488 00:41:46.599 }, 00:41:46.599 { 00:41:46.599 "name": null, 00:41:46.599 "uuid": "00000000-0000-0000-0000-000000000003", 00:41:46.599 "is_configured": false, 00:41:46.599 "data_offset": 2048, 00:41:46.599 "data_size": 63488 00:41:46.599 }, 00:41:46.599 { 00:41:46.599 "name": null, 00:41:46.599 "uuid": "00000000-0000-0000-0000-000000000004", 00:41:46.599 "is_configured": false, 00:41:46.599 "data_offset": 2048, 00:41:46.599 "data_size": 63488 00:41:46.599 } 00:41:46.599 ] 00:41:46.599 }' 00:41:46.599 17:38:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:46.599 17:38:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:46.857 17:38:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:41:46.857 17:38:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:41:46.858 17:38:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:41:46.858 17:38:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:46.858 17:38:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:46.858 [2024-11-26 17:38:47.540839] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:41:46.858 [2024-11-26 
17:38:47.541047] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:46.858 [2024-11-26 17:38:47.541116] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:41:46.858 [2024-11-26 17:38:47.541154] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:46.858 [2024-11-26 17:38:47.541780] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:46.858 [2024-11-26 17:38:47.541845] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:41:46.858 [2024-11-26 17:38:47.541992] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:41:46.858 [2024-11-26 17:38:47.542051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:41:46.858 pt3 00:41:46.858 17:38:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:46.858 17:38:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:41:46.858 17:38:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:46.858 17:38:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:41:46.858 17:38:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:46.858 17:38:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:46.858 17:38:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:41:46.858 17:38:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:46.858 17:38:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:46.858 17:38:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:41:46.858 17:38:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:47.116 17:38:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:47.116 17:38:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:47.116 17:38:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:47.116 17:38:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:47.116 17:38:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:47.116 17:38:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:47.116 "name": "raid_bdev1", 00:41:47.116 "uuid": "dda660b7-2fcc-4c14-b2ee-2835e2fe12a7", 00:41:47.116 "strip_size_kb": 64, 00:41:47.116 "state": "configuring", 00:41:47.116 "raid_level": "raid5f", 00:41:47.116 "superblock": true, 00:41:47.116 "num_base_bdevs": 4, 00:41:47.116 "num_base_bdevs_discovered": 2, 00:41:47.116 "num_base_bdevs_operational": 3, 00:41:47.116 "base_bdevs_list": [ 00:41:47.116 { 00:41:47.116 "name": null, 00:41:47.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:47.116 "is_configured": false, 00:41:47.116 "data_offset": 2048, 00:41:47.116 "data_size": 63488 00:41:47.116 }, 00:41:47.116 { 00:41:47.116 "name": "pt2", 00:41:47.116 "uuid": "00000000-0000-0000-0000-000000000002", 00:41:47.116 "is_configured": true, 00:41:47.116 "data_offset": 2048, 00:41:47.116 "data_size": 63488 00:41:47.116 }, 00:41:47.116 { 00:41:47.116 "name": "pt3", 00:41:47.116 "uuid": "00000000-0000-0000-0000-000000000003", 00:41:47.116 "is_configured": true, 00:41:47.116 "data_offset": 2048, 00:41:47.116 "data_size": 63488 00:41:47.116 }, 00:41:47.116 { 00:41:47.116 "name": null, 00:41:47.116 "uuid": "00000000-0000-0000-0000-000000000004", 00:41:47.116 "is_configured": false, 00:41:47.116 "data_offset": 2048, 
00:41:47.116 "data_size": 63488 00:41:47.116 } 00:41:47.116 ] 00:41:47.116 }' 00:41:47.116 17:38:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:47.116 17:38:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:47.375 17:38:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:41:47.375 17:38:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:41:47.375 17:38:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:41:47.375 17:38:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:41:47.375 17:38:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:47.375 17:38:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:47.375 [2024-11-26 17:38:48.020138] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:41:47.375 [2024-11-26 17:38:48.020344] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:47.375 [2024-11-26 17:38:48.020394] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:41:47.375 [2024-11-26 17:38:48.020426] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:47.375 [2024-11-26 17:38:48.021065] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:47.375 [2024-11-26 17:38:48.021144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:41:47.375 [2024-11-26 17:38:48.021293] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:41:47.375 [2024-11-26 17:38:48.021359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:41:47.375 [2024-11-26 17:38:48.021568] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:41:47.375 [2024-11-26 17:38:48.021610] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:41:47.375 [2024-11-26 17:38:48.021917] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:41:47.375 [2024-11-26 17:38:48.029313] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:41:47.375 [2024-11-26 17:38:48.029386] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:41:47.375 [2024-11-26 17:38:48.029859] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:47.375 pt4 00:41:47.375 17:38:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:47.375 17:38:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:41:47.375 17:38:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:47.375 17:38:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:47.375 17:38:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:47.375 17:38:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:47.375 17:38:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:41:47.375 17:38:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:47.375 17:38:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:47.375 17:38:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:47.375 17:38:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:47.375 
17:38:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:47.375 17:38:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:47.375 17:38:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:47.375 17:38:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:47.375 17:38:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:47.633 17:38:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:47.633 "name": "raid_bdev1", 00:41:47.633 "uuid": "dda660b7-2fcc-4c14-b2ee-2835e2fe12a7", 00:41:47.633 "strip_size_kb": 64, 00:41:47.633 "state": "online", 00:41:47.633 "raid_level": "raid5f", 00:41:47.633 "superblock": true, 00:41:47.633 "num_base_bdevs": 4, 00:41:47.633 "num_base_bdevs_discovered": 3, 00:41:47.633 "num_base_bdevs_operational": 3, 00:41:47.633 "base_bdevs_list": [ 00:41:47.633 { 00:41:47.633 "name": null, 00:41:47.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:47.633 "is_configured": false, 00:41:47.633 "data_offset": 2048, 00:41:47.633 "data_size": 63488 00:41:47.633 }, 00:41:47.633 { 00:41:47.633 "name": "pt2", 00:41:47.633 "uuid": "00000000-0000-0000-0000-000000000002", 00:41:47.633 "is_configured": true, 00:41:47.633 "data_offset": 2048, 00:41:47.633 "data_size": 63488 00:41:47.633 }, 00:41:47.633 { 00:41:47.633 "name": "pt3", 00:41:47.633 "uuid": "00000000-0000-0000-0000-000000000003", 00:41:47.633 "is_configured": true, 00:41:47.633 "data_offset": 2048, 00:41:47.633 "data_size": 63488 00:41:47.633 }, 00:41:47.633 { 00:41:47.633 "name": "pt4", 00:41:47.633 "uuid": "00000000-0000-0000-0000-000000000004", 00:41:47.633 "is_configured": true, 00:41:47.633 "data_offset": 2048, 00:41:47.633 "data_size": 63488 00:41:47.633 } 00:41:47.633 ] 00:41:47.633 }' 00:41:47.633 17:38:48 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:47.633 17:38:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:47.894 17:38:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:41:47.894 17:38:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:47.894 17:38:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:47.894 [2024-11-26 17:38:48.512460] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:41:47.894 [2024-11-26 17:38:48.512543] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:41:47.894 [2024-11-26 17:38:48.512657] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:41:47.894 [2024-11-26 17:38:48.512747] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:41:47.894 [2024-11-26 17:38:48.512761] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:41:47.894 17:38:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:47.894 17:38:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:47.894 17:38:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:41:47.894 17:38:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:47.894 17:38:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:47.894 17:38:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:47.894 17:38:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:41:47.894 17:38:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:41:47.894 17:38:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:41:47.894 17:38:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:41:47.894 17:38:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:41:47.894 17:38:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:47.894 17:38:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:47.894 17:38:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:47.894 17:38:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:41:47.894 17:38:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:47.894 17:38:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:48.158 [2024-11-26 17:38:48.588353] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:41:48.158 [2024-11-26 17:38:48.588468] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:48.158 [2024-11-26 17:38:48.588534] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:41:48.158 [2024-11-26 17:38:48.588561] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:48.158 [2024-11-26 17:38:48.591837] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:48.158 [2024-11-26 17:38:48.591889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:41:48.158 [2024-11-26 17:38:48.592022] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:41:48.158 [2024-11-26 17:38:48.592098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:41:48.158 
[2024-11-26 17:38:48.592266] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:41:48.158 [2024-11-26 17:38:48.592284] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:41:48.159 [2024-11-26 17:38:48.592306] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:41:48.159 [2024-11-26 17:38:48.592392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:41:48.159 [2024-11-26 17:38:48.592637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:41:48.159 pt1 00:41:48.159 17:38:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:48.159 17:38:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:41:48.159 17:38:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:41:48.159 17:38:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:48.159 17:38:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:41:48.159 17:38:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:48.159 17:38:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:48.159 17:38:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:41:48.159 17:38:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:48.159 17:38:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:48.159 17:38:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:48.159 17:38:48 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:41:48.159 17:38:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:48.159 17:38:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:48.159 17:38:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:48.159 17:38:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:48.159 17:38:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:48.159 17:38:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:48.159 "name": "raid_bdev1", 00:41:48.159 "uuid": "dda660b7-2fcc-4c14-b2ee-2835e2fe12a7", 00:41:48.159 "strip_size_kb": 64, 00:41:48.159 "state": "configuring", 00:41:48.159 "raid_level": "raid5f", 00:41:48.159 "superblock": true, 00:41:48.159 "num_base_bdevs": 4, 00:41:48.159 "num_base_bdevs_discovered": 2, 00:41:48.159 "num_base_bdevs_operational": 3, 00:41:48.159 "base_bdevs_list": [ 00:41:48.159 { 00:41:48.159 "name": null, 00:41:48.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:48.159 "is_configured": false, 00:41:48.159 "data_offset": 2048, 00:41:48.159 "data_size": 63488 00:41:48.159 }, 00:41:48.159 { 00:41:48.159 "name": "pt2", 00:41:48.159 "uuid": "00000000-0000-0000-0000-000000000002", 00:41:48.159 "is_configured": true, 00:41:48.159 "data_offset": 2048, 00:41:48.159 "data_size": 63488 00:41:48.159 }, 00:41:48.159 { 00:41:48.159 "name": "pt3", 00:41:48.159 "uuid": "00000000-0000-0000-0000-000000000003", 00:41:48.159 "is_configured": true, 00:41:48.159 "data_offset": 2048, 00:41:48.159 "data_size": 63488 00:41:48.159 }, 00:41:48.159 { 00:41:48.159 "name": null, 00:41:48.159 "uuid": "00000000-0000-0000-0000-000000000004", 00:41:48.159 "is_configured": false, 00:41:48.159 "data_offset": 2048, 00:41:48.159 "data_size": 63488 00:41:48.159 } 00:41:48.159 ] 
00:41:48.159 }' 00:41:48.159 17:38:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:48.159 17:38:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:48.418 17:38:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:41:48.418 17:38:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:41:48.418 17:38:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:48.418 17:38:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:48.418 17:38:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:48.418 17:38:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:41:48.418 17:38:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:41:48.418 17:38:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:48.418 17:38:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:48.418 [2024-11-26 17:38:49.075665] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:41:48.418 [2024-11-26 17:38:49.075849] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:48.418 [2024-11-26 17:38:49.075910] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:41:48.418 [2024-11-26 17:38:49.075946] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:48.418 [2024-11-26 17:38:49.076659] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:48.418 [2024-11-26 17:38:49.076740] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:41:48.418 [2024-11-26 17:38:49.076898] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:41:48.418 [2024-11-26 17:38:49.076960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:41:48.418 [2024-11-26 17:38:49.077187] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:41:48.418 [2024-11-26 17:38:49.077232] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:41:48.418 [2024-11-26 17:38:49.077602] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:41:48.418 [2024-11-26 17:38:49.085730] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:41:48.418 [2024-11-26 17:38:49.085800] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:41:48.418 [2024-11-26 17:38:49.086215] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:48.418 pt4 00:41:48.418 17:38:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:48.418 17:38:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:41:48.418 17:38:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:48.418 17:38:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:48.418 17:38:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:48.418 17:38:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:48.418 17:38:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:41:48.418 17:38:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:48.418 17:38:49 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:48.418 17:38:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:48.418 17:38:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:48.418 17:38:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:48.418 17:38:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:48.418 17:38:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:48.418 17:38:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:48.688 17:38:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:48.688 17:38:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:48.689 "name": "raid_bdev1", 00:41:48.689 "uuid": "dda660b7-2fcc-4c14-b2ee-2835e2fe12a7", 00:41:48.689 "strip_size_kb": 64, 00:41:48.689 "state": "online", 00:41:48.689 "raid_level": "raid5f", 00:41:48.689 "superblock": true, 00:41:48.689 "num_base_bdevs": 4, 00:41:48.689 "num_base_bdevs_discovered": 3, 00:41:48.689 "num_base_bdevs_operational": 3, 00:41:48.689 "base_bdevs_list": [ 00:41:48.689 { 00:41:48.689 "name": null, 00:41:48.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:48.689 "is_configured": false, 00:41:48.689 "data_offset": 2048, 00:41:48.689 "data_size": 63488 00:41:48.689 }, 00:41:48.689 { 00:41:48.689 "name": "pt2", 00:41:48.689 "uuid": "00000000-0000-0000-0000-000000000002", 00:41:48.689 "is_configured": true, 00:41:48.689 "data_offset": 2048, 00:41:48.689 "data_size": 63488 00:41:48.689 }, 00:41:48.689 { 00:41:48.689 "name": "pt3", 00:41:48.689 "uuid": "00000000-0000-0000-0000-000000000003", 00:41:48.689 "is_configured": true, 00:41:48.689 "data_offset": 2048, 00:41:48.689 "data_size": 63488 
00:41:48.689 }, 00:41:48.689 { 00:41:48.689 "name": "pt4", 00:41:48.689 "uuid": "00000000-0000-0000-0000-000000000004", 00:41:48.689 "is_configured": true, 00:41:48.689 "data_offset": 2048, 00:41:48.689 "data_size": 63488 00:41:48.689 } 00:41:48.689 ] 00:41:48.689 }' 00:41:48.689 17:38:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:48.689 17:38:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:48.955 17:38:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:41:48.955 17:38:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:48.955 17:38:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:48.955 17:38:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:41:48.955 17:38:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:48.955 17:38:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:41:48.955 17:38:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:41:48.955 17:38:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:41:48.955 17:38:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:48.955 17:38:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:48.955 [2024-11-26 17:38:49.584646] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:41:48.955 17:38:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:48.955 17:38:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' dda660b7-2fcc-4c14-b2ee-2835e2fe12a7 '!=' dda660b7-2fcc-4c14-b2ee-2835e2fe12a7 ']' 00:41:48.955 17:38:49 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84412 00:41:48.955 17:38:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84412 ']' 00:41:48.955 17:38:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84412 00:41:48.955 17:38:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:41:48.956 17:38:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:48.956 17:38:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84412 00:41:48.956 17:38:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:48.956 killing process with pid 84412 00:41:48.956 17:38:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:48.956 17:38:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84412' 00:41:48.956 17:38:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 84412 00:41:48.956 [2024-11-26 17:38:49.647219] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:41:48.956 [2024-11-26 17:38:49.647351] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:41:48.956 17:38:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 84412 00:41:48.956 [2024-11-26 17:38:49.647457] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:41:48.956 [2024-11-26 17:38:49.647478] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:41:49.522 [2024-11-26 17:38:50.094568] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:41:50.897 17:38:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:41:50.897 
00:41:50.897 real 0m9.013s 00:41:50.897 user 0m13.840s 00:41:50.897 sys 0m1.912s 00:41:50.897 17:38:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:50.897 ************************************ 00:41:50.897 END TEST raid5f_superblock_test 00:41:50.897 ************************************ 00:41:50.897 17:38:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:41:50.897 17:38:51 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:41:50.897 17:38:51 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:41:50.897 17:38:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:41:50.897 17:38:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:50.897 17:38:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:41:50.897 ************************************ 00:41:50.897 START TEST raid5f_rebuild_test 00:41:50.897 ************************************ 00:41:50.897 17:38:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:41:50.897 17:38:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:41:50.897 17:38:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:41:50.897 17:38:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:41:50.897 17:38:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:41:50.897 17:38:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:41:50.897 17:38:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:41:50.897 17:38:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:41:50.897 17:38:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:41:50.897 17:38:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:41:50.897 17:38:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:41:50.897 17:38:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:41:50.897 17:38:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:41:50.897 17:38:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:41:50.897 17:38:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:41:50.897 17:38:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:41:50.897 17:38:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:41:50.897 17:38:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:41:50.897 17:38:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:41:50.897 17:38:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:41:50.897 17:38:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:41:50.897 17:38:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:41:50.897 17:38:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:41:50.897 17:38:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:41:50.897 17:38:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:41:50.897 17:38:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:41:50.897 17:38:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:41:50.897 17:38:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:41:50.897 17:38:51 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:41:50.897 17:38:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:41:50.897 17:38:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:41:50.897 17:38:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:41:50.897 17:38:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84903 00:41:50.897 17:38:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:41:50.897 17:38:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84903 00:41:50.897 17:38:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 84903 ']' 00:41:50.897 17:38:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:50.897 17:38:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:50.897 17:38:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:50.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:50.897 17:38:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:50.897 17:38:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:50.897 [2024-11-26 17:38:51.544208] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:41:50.897 [2024-11-26 17:38:51.544449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84903 ] 00:41:50.897 I/O size of 3145728 is greater than zero copy threshold (65536). 00:41:50.897 Zero copy mechanism will not be used. 00:41:51.156 [2024-11-26 17:38:51.722459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:51.414 [2024-11-26 17:38:51.863115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:51.672 [2024-11-26 17:38:52.113276] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:41:51.672 [2024-11-26 17:38:52.113447] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:41:51.931 17:38:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:51.931 17:38:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:41:51.931 17:38:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:41:51.931 17:38:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:41:51.931 17:38:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:51.931 17:38:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:51.931 BaseBdev1_malloc 00:41:51.931 17:38:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:51.931 17:38:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:41:51.931 17:38:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:51.931 17:38:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:41:51.931 [2024-11-26 17:38:52.439963] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:41:51.931 [2024-11-26 17:38:52.440042] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:51.931 [2024-11-26 17:38:52.440070] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:41:51.931 [2024-11-26 17:38:52.440082] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:51.931 [2024-11-26 17:38:52.442596] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:51.931 [2024-11-26 17:38:52.442636] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:41:51.931 BaseBdev1 00:41:51.931 17:38:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:51.931 17:38:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:41:51.931 17:38:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:41:51.931 17:38:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:51.931 17:38:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:51.931 BaseBdev2_malloc 00:41:51.931 17:38:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:51.931 17:38:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:41:51.931 17:38:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:51.931 17:38:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:51.931 [2024-11-26 17:38:52.503739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:41:51.931 [2024-11-26 17:38:52.503815] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:51.931 [2024-11-26 17:38:52.503841] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:41:51.931 [2024-11-26 17:38:52.503853] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:51.932 [2024-11-26 17:38:52.506328] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:51.932 [2024-11-26 17:38:52.506449] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:41:51.932 BaseBdev2 00:41:51.932 17:38:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:51.932 17:38:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:41:51.932 17:38:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:41:51.932 17:38:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:51.932 17:38:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:51.932 BaseBdev3_malloc 00:41:51.932 17:38:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:51.932 17:38:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:41:51.932 17:38:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:51.932 17:38:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:51.932 [2024-11-26 17:38:52.578231] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:41:51.932 [2024-11-26 17:38:52.578373] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:51.932 [2024-11-26 17:38:52.578405] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:41:51.932 
[2024-11-26 17:38:52.578418] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:51.932 [2024-11-26 17:38:52.580977] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:51.932 [2024-11-26 17:38:52.581023] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:41:51.932 BaseBdev3 00:41:51.932 17:38:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:51.932 17:38:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:41:51.932 17:38:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:41:51.932 17:38:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:51.932 17:38:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:52.190 BaseBdev4_malloc 00:41:52.190 17:38:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.190 17:38:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:41:52.190 17:38:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:52.190 17:38:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:52.190 [2024-11-26 17:38:52.641087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:41:52.190 [2024-11-26 17:38:52.641159] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:52.190 [2024-11-26 17:38:52.641185] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:41:52.190 [2024-11-26 17:38:52.641196] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:52.190 [2024-11-26 17:38:52.643673] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:41:52.190 [2024-11-26 17:38:52.643797] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:41:52.190 BaseBdev4 00:41:52.190 17:38:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.190 17:38:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:41:52.190 17:38:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:52.190 17:38:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:52.190 spare_malloc 00:41:52.190 17:38:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.190 17:38:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:41:52.190 17:38:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:52.190 17:38:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:52.190 spare_delay 00:41:52.190 17:38:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.190 17:38:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:41:52.190 17:38:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:52.190 17:38:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:52.190 [2024-11-26 17:38:52.715685] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:41:52.190 [2024-11-26 17:38:52.715759] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:52.190 [2024-11-26 17:38:52.715780] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:41:52.190 [2024-11-26 17:38:52.715792] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:52.190 [2024-11-26 17:38:52.718228] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:52.190 [2024-11-26 17:38:52.718271] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:41:52.190 spare 00:41:52.190 17:38:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.190 17:38:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:41:52.190 17:38:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:52.190 17:38:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:52.190 [2024-11-26 17:38:52.727752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:41:52.190 [2024-11-26 17:38:52.729925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:41:52.190 [2024-11-26 17:38:52.730063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:41:52.190 [2024-11-26 17:38:52.730124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:41:52.190 [2024-11-26 17:38:52.730221] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:41:52.190 [2024-11-26 17:38:52.730234] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:41:52.190 [2024-11-26 17:38:52.730533] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:41:52.190 [2024-11-26 17:38:52.738206] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:41:52.190 [2024-11-26 17:38:52.738265] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:41:52.190 [2024-11-26 
17:38:52.738556] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:52.190 17:38:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.190 17:38:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:41:52.190 17:38:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:52.190 17:38:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:52.190 17:38:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:52.190 17:38:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:52.190 17:38:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:41:52.190 17:38:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:52.190 17:38:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:52.190 17:38:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:52.190 17:38:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:52.190 17:38:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:52.190 17:38:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:52.190 17:38:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:52.190 17:38:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:52.190 17:38:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.190 17:38:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:52.190 "name": "raid_bdev1", 00:41:52.190 "uuid": 
"72aef380-58d1-498e-8b82-55973d3dd743", 00:41:52.190 "strip_size_kb": 64, 00:41:52.190 "state": "online", 00:41:52.190 "raid_level": "raid5f", 00:41:52.190 "superblock": false, 00:41:52.190 "num_base_bdevs": 4, 00:41:52.190 "num_base_bdevs_discovered": 4, 00:41:52.190 "num_base_bdevs_operational": 4, 00:41:52.190 "base_bdevs_list": [ 00:41:52.191 { 00:41:52.191 "name": "BaseBdev1", 00:41:52.191 "uuid": "d9a70b25-a550-543f-bd60-02a26730b110", 00:41:52.191 "is_configured": true, 00:41:52.191 "data_offset": 0, 00:41:52.191 "data_size": 65536 00:41:52.191 }, 00:41:52.191 { 00:41:52.191 "name": "BaseBdev2", 00:41:52.191 "uuid": "59ff7d36-c8f7-59cd-b9e1-88a29c162e2e", 00:41:52.191 "is_configured": true, 00:41:52.191 "data_offset": 0, 00:41:52.191 "data_size": 65536 00:41:52.191 }, 00:41:52.191 { 00:41:52.191 "name": "BaseBdev3", 00:41:52.191 "uuid": "e5fc7e3d-0eb4-585c-9fc8-4fba63c01c43", 00:41:52.191 "is_configured": true, 00:41:52.191 "data_offset": 0, 00:41:52.191 "data_size": 65536 00:41:52.191 }, 00:41:52.191 { 00:41:52.191 "name": "BaseBdev4", 00:41:52.191 "uuid": "79c0588e-87f6-5ac2-af97-d97134e78364", 00:41:52.191 "is_configured": true, 00:41:52.191 "data_offset": 0, 00:41:52.191 "data_size": 65536 00:41:52.191 } 00:41:52.191 ] 00:41:52.191 }' 00:41:52.191 17:38:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:52.191 17:38:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:52.756 17:38:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:41:52.756 17:38:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:41:52.756 17:38:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:52.756 17:38:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:52.756 [2024-11-26 17:38:53.215733] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:41:52.756 17:38:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.756 17:38:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:41:52.756 17:38:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:52.756 17:38:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:41:52.756 17:38:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:52.756 17:38:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:52.756 17:38:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.756 17:38:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:41:52.756 17:38:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:41:52.756 17:38:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:41:52.756 17:38:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:41:52.756 17:38:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:41:52.756 17:38:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:41:52.756 17:38:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:41:52.756 17:38:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:41:52.756 17:38:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:41:52.756 17:38:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:41:52.757 17:38:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:41:52.757 17:38:53 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:41:52.757 17:38:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:41:52.757 17:38:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:41:53.014 [2024-11-26 17:38:53.487100] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:41:53.014 /dev/nbd0 00:41:53.014 17:38:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:41:53.014 17:38:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:41:53.014 17:38:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:41:53.014 17:38:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:41:53.014 17:38:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:41:53.014 17:38:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:41:53.014 17:38:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:41:53.014 17:38:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:41:53.014 17:38:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:41:53.014 17:38:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:41:53.014 17:38:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:53.014 1+0 records in 00:41:53.014 1+0 records out 00:41:53.014 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318161 s, 12.9 MB/s 00:41:53.014 17:38:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:53.014 17:38:53 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:41:53.014 17:38:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:53.014 17:38:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:41:53.014 17:38:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:41:53.014 17:38:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:41:53.014 17:38:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:41:53.014 17:38:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:41:53.014 17:38:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:41:53.014 17:38:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:41:53.014 17:38:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:41:53.578 512+0 records in 00:41:53.578 512+0 records out 00:41:53.578 100663296 bytes (101 MB, 96 MiB) copied, 0.508172 s, 198 MB/s 00:41:53.578 17:38:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:41:53.578 17:38:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:41:53.578 17:38:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:41:53.578 17:38:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:41:53.578 17:38:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:41:53.578 17:38:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:53.578 17:38:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:41:53.836 17:38:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:41:53.836 [2024-11-26 17:38:54.303252] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:53.836 17:38:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:41:53.836 17:38:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:41:53.836 17:38:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:53.836 17:38:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:53.836 17:38:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:41:53.836 17:38:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:41:53.836 17:38:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:41:53.836 17:38:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:41:53.836 17:38:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:53.836 17:38:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:53.836 [2024-11-26 17:38:54.326986] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:41:53.836 17:38:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:53.836 17:38:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:41:53.836 17:38:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:53.836 17:38:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:53.836 17:38:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:53.836 17:38:54 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:53.836 17:38:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:41:53.836 17:38:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:53.836 17:38:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:53.836 17:38:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:53.836 17:38:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:53.836 17:38:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:53.836 17:38:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:53.836 17:38:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:53.836 17:38:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:53.836 17:38:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:53.836 17:38:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:53.836 "name": "raid_bdev1", 00:41:53.836 "uuid": "72aef380-58d1-498e-8b82-55973d3dd743", 00:41:53.836 "strip_size_kb": 64, 00:41:53.836 "state": "online", 00:41:53.836 "raid_level": "raid5f", 00:41:53.836 "superblock": false, 00:41:53.836 "num_base_bdevs": 4, 00:41:53.836 "num_base_bdevs_discovered": 3, 00:41:53.836 "num_base_bdevs_operational": 3, 00:41:53.836 "base_bdevs_list": [ 00:41:53.836 { 00:41:53.836 "name": null, 00:41:53.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:53.836 "is_configured": false, 00:41:53.836 "data_offset": 0, 00:41:53.836 "data_size": 65536 00:41:53.836 }, 00:41:53.836 { 00:41:53.836 "name": "BaseBdev2", 00:41:53.836 "uuid": "59ff7d36-c8f7-59cd-b9e1-88a29c162e2e", 00:41:53.836 "is_configured": true, 00:41:53.836 
"data_offset": 0, 00:41:53.836 "data_size": 65536 00:41:53.836 }, 00:41:53.836 { 00:41:53.836 "name": "BaseBdev3", 00:41:53.836 "uuid": "e5fc7e3d-0eb4-585c-9fc8-4fba63c01c43", 00:41:53.836 "is_configured": true, 00:41:53.836 "data_offset": 0, 00:41:53.836 "data_size": 65536 00:41:53.836 }, 00:41:53.836 { 00:41:53.836 "name": "BaseBdev4", 00:41:53.836 "uuid": "79c0588e-87f6-5ac2-af97-d97134e78364", 00:41:53.836 "is_configured": true, 00:41:53.836 "data_offset": 0, 00:41:53.836 "data_size": 65536 00:41:53.836 } 00:41:53.836 ] 00:41:53.836 }' 00:41:53.836 17:38:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:53.837 17:38:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:54.402 17:38:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:41:54.402 17:38:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:54.402 17:38:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:54.402 [2024-11-26 17:38:54.826180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:41:54.402 [2024-11-26 17:38:54.843590] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:41:54.402 17:38:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:54.402 17:38:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:41:54.402 [2024-11-26 17:38:54.854373] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:41:55.401 17:38:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:55.401 17:38:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:55.401 17:38:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:41:55.401 17:38:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:55.401 17:38:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:55.401 17:38:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:55.401 17:38:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:55.401 17:38:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:55.401 17:38:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:55.401 17:38:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:55.401 17:38:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:55.401 "name": "raid_bdev1", 00:41:55.401 "uuid": "72aef380-58d1-498e-8b82-55973d3dd743", 00:41:55.401 "strip_size_kb": 64, 00:41:55.401 "state": "online", 00:41:55.401 "raid_level": "raid5f", 00:41:55.401 "superblock": false, 00:41:55.401 "num_base_bdevs": 4, 00:41:55.401 "num_base_bdevs_discovered": 4, 00:41:55.401 "num_base_bdevs_operational": 4, 00:41:55.401 "process": { 00:41:55.401 "type": "rebuild", 00:41:55.401 "target": "spare", 00:41:55.401 "progress": { 00:41:55.401 "blocks": 19200, 00:41:55.401 "percent": 9 00:41:55.401 } 00:41:55.401 }, 00:41:55.401 "base_bdevs_list": [ 00:41:55.401 { 00:41:55.401 "name": "spare", 00:41:55.401 "uuid": "74f31db4-87e4-5991-b723-d85a06d4c4ff", 00:41:55.401 "is_configured": true, 00:41:55.401 "data_offset": 0, 00:41:55.401 "data_size": 65536 00:41:55.401 }, 00:41:55.401 { 00:41:55.401 "name": "BaseBdev2", 00:41:55.401 "uuid": "59ff7d36-c8f7-59cd-b9e1-88a29c162e2e", 00:41:55.401 "is_configured": true, 00:41:55.401 "data_offset": 0, 00:41:55.401 "data_size": 65536 00:41:55.401 }, 00:41:55.401 { 00:41:55.401 "name": "BaseBdev3", 00:41:55.401 "uuid": 
"e5fc7e3d-0eb4-585c-9fc8-4fba63c01c43", 00:41:55.401 "is_configured": true, 00:41:55.401 "data_offset": 0, 00:41:55.401 "data_size": 65536 00:41:55.401 }, 00:41:55.401 { 00:41:55.401 "name": "BaseBdev4", 00:41:55.401 "uuid": "79c0588e-87f6-5ac2-af97-d97134e78364", 00:41:55.401 "is_configured": true, 00:41:55.401 "data_offset": 0, 00:41:55.401 "data_size": 65536 00:41:55.401 } 00:41:55.401 ] 00:41:55.401 }' 00:41:55.401 17:38:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:55.401 17:38:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:55.401 17:38:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:55.401 17:38:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:55.401 17:38:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:41:55.401 17:38:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:55.401 17:38:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:55.401 [2024-11-26 17:38:55.989143] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:41:55.401 [2024-11-26 17:38:56.065176] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:41:55.401 [2024-11-26 17:38:56.065247] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:55.401 [2024-11-26 17:38:56.065266] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:41:55.401 [2024-11-26 17:38:56.065277] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:41:55.661 17:38:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:55.661 17:38:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:41:55.661 17:38:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:41:55.661 17:38:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:41:55.661 17:38:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:41:55.661 17:38:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:41:55.661 17:38:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:41:55.661 17:38:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:41:55.661 17:38:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:41:55.661 17:38:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:41:55.661 17:38:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:41:55.661 17:38:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:55.661 17:38:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:55.661 17:38:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:55.661 17:38:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:55.661 17:38:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:55.661 17:38:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:41:55.661 "name": "raid_bdev1", 00:41:55.661 "uuid": "72aef380-58d1-498e-8b82-55973d3dd743", 00:41:55.661 "strip_size_kb": 64, 00:41:55.661 "state": "online", 00:41:55.661 "raid_level": "raid5f", 00:41:55.661 "superblock": false, 00:41:55.661 "num_base_bdevs": 4, 00:41:55.661 "num_base_bdevs_discovered": 3, 00:41:55.661 
"num_base_bdevs_operational": 3, 00:41:55.661 "base_bdevs_list": [ 00:41:55.661 { 00:41:55.661 "name": null, 00:41:55.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:55.661 "is_configured": false, 00:41:55.661 "data_offset": 0, 00:41:55.661 "data_size": 65536 00:41:55.661 }, 00:41:55.661 { 00:41:55.661 "name": "BaseBdev2", 00:41:55.661 "uuid": "59ff7d36-c8f7-59cd-b9e1-88a29c162e2e", 00:41:55.661 "is_configured": true, 00:41:55.661 "data_offset": 0, 00:41:55.661 "data_size": 65536 00:41:55.661 }, 00:41:55.661 { 00:41:55.661 "name": "BaseBdev3", 00:41:55.661 "uuid": "e5fc7e3d-0eb4-585c-9fc8-4fba63c01c43", 00:41:55.661 "is_configured": true, 00:41:55.661 "data_offset": 0, 00:41:55.661 "data_size": 65536 00:41:55.661 }, 00:41:55.661 { 00:41:55.661 "name": "BaseBdev4", 00:41:55.661 "uuid": "79c0588e-87f6-5ac2-af97-d97134e78364", 00:41:55.661 "is_configured": true, 00:41:55.661 "data_offset": 0, 00:41:55.661 "data_size": 65536 00:41:55.661 } 00:41:55.661 ] 00:41:55.661 }' 00:41:55.661 17:38:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:41:55.661 17:38:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:55.920 17:38:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:41:55.920 17:38:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:55.920 17:38:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:41:55.920 17:38:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:41:55.920 17:38:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:55.920 17:38:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:55.920 17:38:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:55.920 17:38:56 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:55.920 17:38:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:55.920 17:38:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:55.920 17:38:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:55.920 "name": "raid_bdev1", 00:41:55.920 "uuid": "72aef380-58d1-498e-8b82-55973d3dd743", 00:41:55.920 "strip_size_kb": 64, 00:41:55.920 "state": "online", 00:41:55.920 "raid_level": "raid5f", 00:41:55.920 "superblock": false, 00:41:55.920 "num_base_bdevs": 4, 00:41:55.920 "num_base_bdevs_discovered": 3, 00:41:55.920 "num_base_bdevs_operational": 3, 00:41:55.920 "base_bdevs_list": [ 00:41:55.920 { 00:41:55.920 "name": null, 00:41:55.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:55.920 "is_configured": false, 00:41:55.920 "data_offset": 0, 00:41:55.920 "data_size": 65536 00:41:55.920 }, 00:41:55.920 { 00:41:55.920 "name": "BaseBdev2", 00:41:55.920 "uuid": "59ff7d36-c8f7-59cd-b9e1-88a29c162e2e", 00:41:55.920 "is_configured": true, 00:41:55.920 "data_offset": 0, 00:41:55.920 "data_size": 65536 00:41:55.920 }, 00:41:55.920 { 00:41:55.920 "name": "BaseBdev3", 00:41:55.920 "uuid": "e5fc7e3d-0eb4-585c-9fc8-4fba63c01c43", 00:41:55.920 "is_configured": true, 00:41:55.920 "data_offset": 0, 00:41:55.920 "data_size": 65536 00:41:55.920 }, 00:41:55.920 { 00:41:55.920 "name": "BaseBdev4", 00:41:55.920 "uuid": "79c0588e-87f6-5ac2-af97-d97134e78364", 00:41:55.920 "is_configured": true, 00:41:55.920 "data_offset": 0, 00:41:55.920 "data_size": 65536 00:41:55.920 } 00:41:55.920 ] 00:41:55.920 }' 00:41:55.920 17:38:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:56.179 17:38:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:41:56.179 17:38:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:41:56.179 17:38:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:41:56.179 17:38:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:41:56.179 17:38:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:56.179 17:38:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:56.179 [2024-11-26 17:38:56.703845] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:41:56.179 [2024-11-26 17:38:56.721373] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:41:56.179 17:38:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:56.179 17:38:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:41:56.179 [2024-11-26 17:38:56.731918] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:41:57.116 17:38:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:57.116 17:38:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:57.116 17:38:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:57.116 17:38:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:57.116 17:38:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:57.116 17:38:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:57.116 17:38:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:57.116 17:38:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:57.116 17:38:57 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:57.116 17:38:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:57.116 17:38:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:57.116 "name": "raid_bdev1", 00:41:57.116 "uuid": "72aef380-58d1-498e-8b82-55973d3dd743", 00:41:57.116 "strip_size_kb": 64, 00:41:57.116 "state": "online", 00:41:57.116 "raid_level": "raid5f", 00:41:57.116 "superblock": false, 00:41:57.116 "num_base_bdevs": 4, 00:41:57.116 "num_base_bdevs_discovered": 4, 00:41:57.116 "num_base_bdevs_operational": 4, 00:41:57.116 "process": { 00:41:57.116 "type": "rebuild", 00:41:57.116 "target": "spare", 00:41:57.116 "progress": { 00:41:57.116 "blocks": 17280, 00:41:57.116 "percent": 8 00:41:57.116 } 00:41:57.116 }, 00:41:57.116 "base_bdevs_list": [ 00:41:57.116 { 00:41:57.116 "name": "spare", 00:41:57.116 "uuid": "74f31db4-87e4-5991-b723-d85a06d4c4ff", 00:41:57.116 "is_configured": true, 00:41:57.116 "data_offset": 0, 00:41:57.116 "data_size": 65536 00:41:57.116 }, 00:41:57.116 { 00:41:57.116 "name": "BaseBdev2", 00:41:57.116 "uuid": "59ff7d36-c8f7-59cd-b9e1-88a29c162e2e", 00:41:57.116 "is_configured": true, 00:41:57.116 "data_offset": 0, 00:41:57.116 "data_size": 65536 00:41:57.116 }, 00:41:57.116 { 00:41:57.116 "name": "BaseBdev3", 00:41:57.116 "uuid": "e5fc7e3d-0eb4-585c-9fc8-4fba63c01c43", 00:41:57.116 "is_configured": true, 00:41:57.116 "data_offset": 0, 00:41:57.116 "data_size": 65536 00:41:57.116 }, 00:41:57.116 { 00:41:57.116 "name": "BaseBdev4", 00:41:57.116 "uuid": "79c0588e-87f6-5ac2-af97-d97134e78364", 00:41:57.116 "is_configured": true, 00:41:57.116 "data_offset": 0, 00:41:57.116 "data_size": 65536 00:41:57.116 } 00:41:57.116 ] 00:41:57.116 }' 00:41:57.116 17:38:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:57.376 17:38:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:41:57.376 17:38:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:57.376 17:38:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:57.376 17:38:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:41:57.376 17:38:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:41:57.377 17:38:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:41:57.377 17:38:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=632 00:41:57.377 17:38:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:41:57.377 17:38:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:57.377 17:38:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:57.377 17:38:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:57.377 17:38:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:57.377 17:38:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:57.377 17:38:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:57.377 17:38:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:57.377 17:38:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:57.377 17:38:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:57.377 17:38:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:57.377 17:38:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:57.377 
"name": "raid_bdev1", 00:41:57.377 "uuid": "72aef380-58d1-498e-8b82-55973d3dd743", 00:41:57.377 "strip_size_kb": 64, 00:41:57.377 "state": "online", 00:41:57.377 "raid_level": "raid5f", 00:41:57.377 "superblock": false, 00:41:57.377 "num_base_bdevs": 4, 00:41:57.377 "num_base_bdevs_discovered": 4, 00:41:57.377 "num_base_bdevs_operational": 4, 00:41:57.377 "process": { 00:41:57.377 "type": "rebuild", 00:41:57.377 "target": "spare", 00:41:57.377 "progress": { 00:41:57.377 "blocks": 21120, 00:41:57.377 "percent": 10 00:41:57.377 } 00:41:57.377 }, 00:41:57.377 "base_bdevs_list": [ 00:41:57.377 { 00:41:57.377 "name": "spare", 00:41:57.377 "uuid": "74f31db4-87e4-5991-b723-d85a06d4c4ff", 00:41:57.377 "is_configured": true, 00:41:57.377 "data_offset": 0, 00:41:57.377 "data_size": 65536 00:41:57.377 }, 00:41:57.377 { 00:41:57.377 "name": "BaseBdev2", 00:41:57.377 "uuid": "59ff7d36-c8f7-59cd-b9e1-88a29c162e2e", 00:41:57.377 "is_configured": true, 00:41:57.377 "data_offset": 0, 00:41:57.377 "data_size": 65536 00:41:57.377 }, 00:41:57.377 { 00:41:57.377 "name": "BaseBdev3", 00:41:57.377 "uuid": "e5fc7e3d-0eb4-585c-9fc8-4fba63c01c43", 00:41:57.377 "is_configured": true, 00:41:57.377 "data_offset": 0, 00:41:57.377 "data_size": 65536 00:41:57.377 }, 00:41:57.377 { 00:41:57.377 "name": "BaseBdev4", 00:41:57.377 "uuid": "79c0588e-87f6-5ac2-af97-d97134e78364", 00:41:57.377 "is_configured": true, 00:41:57.377 "data_offset": 0, 00:41:57.377 "data_size": 65536 00:41:57.377 } 00:41:57.377 ] 00:41:57.377 }' 00:41:57.377 17:38:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:57.377 17:38:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:57.377 17:38:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:57.377 17:38:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:57.377 17:38:58 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:41:58.757 17:38:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:41:58.757 17:38:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:58.757 17:38:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:58.757 17:38:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:58.757 17:38:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:41:58.757 17:38:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:58.757 17:38:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:58.757 17:38:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:58.757 17:38:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:58.757 17:38:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:58.757 17:38:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:58.757 17:38:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:58.757 "name": "raid_bdev1", 00:41:58.757 "uuid": "72aef380-58d1-498e-8b82-55973d3dd743", 00:41:58.757 "strip_size_kb": 64, 00:41:58.757 "state": "online", 00:41:58.757 "raid_level": "raid5f", 00:41:58.757 "superblock": false, 00:41:58.757 "num_base_bdevs": 4, 00:41:58.757 "num_base_bdevs_discovered": 4, 00:41:58.757 "num_base_bdevs_operational": 4, 00:41:58.757 "process": { 00:41:58.757 "type": "rebuild", 00:41:58.757 "target": "spare", 00:41:58.757 "progress": { 00:41:58.757 "blocks": 42240, 00:41:58.757 "percent": 21 00:41:58.757 } 00:41:58.757 }, 00:41:58.757 "base_bdevs_list": [ 00:41:58.757 { 
00:41:58.757 "name": "spare", 00:41:58.757 "uuid": "74f31db4-87e4-5991-b723-d85a06d4c4ff", 00:41:58.757 "is_configured": true, 00:41:58.757 "data_offset": 0, 00:41:58.757 "data_size": 65536 00:41:58.757 }, 00:41:58.757 { 00:41:58.757 "name": "BaseBdev2", 00:41:58.757 "uuid": "59ff7d36-c8f7-59cd-b9e1-88a29c162e2e", 00:41:58.757 "is_configured": true, 00:41:58.757 "data_offset": 0, 00:41:58.757 "data_size": 65536 00:41:58.757 }, 00:41:58.757 { 00:41:58.757 "name": "BaseBdev3", 00:41:58.757 "uuid": "e5fc7e3d-0eb4-585c-9fc8-4fba63c01c43", 00:41:58.757 "is_configured": true, 00:41:58.757 "data_offset": 0, 00:41:58.757 "data_size": 65536 00:41:58.757 }, 00:41:58.757 { 00:41:58.757 "name": "BaseBdev4", 00:41:58.757 "uuid": "79c0588e-87f6-5ac2-af97-d97134e78364", 00:41:58.757 "is_configured": true, 00:41:58.757 "data_offset": 0, 00:41:58.757 "data_size": 65536 00:41:58.757 } 00:41:58.757 ] 00:41:58.757 }' 00:41:58.757 17:38:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:58.757 17:38:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:58.757 17:38:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:58.757 17:38:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:58.757 17:38:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:41:59.694 17:39:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:41:59.694 17:39:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:59.694 17:39:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:41:59.694 17:39:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:41:59.694 17:39:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:41:59.694 17:39:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:41:59.694 17:39:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:41:59.694 17:39:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:59.694 17:39:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:41:59.694 17:39:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:59.694 17:39:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:59.694 17:39:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:41:59.694 "name": "raid_bdev1", 00:41:59.694 "uuid": "72aef380-58d1-498e-8b82-55973d3dd743", 00:41:59.694 "strip_size_kb": 64, 00:41:59.694 "state": "online", 00:41:59.694 "raid_level": "raid5f", 00:41:59.694 "superblock": false, 00:41:59.694 "num_base_bdevs": 4, 00:41:59.694 "num_base_bdevs_discovered": 4, 00:41:59.694 "num_base_bdevs_operational": 4, 00:41:59.694 "process": { 00:41:59.694 "type": "rebuild", 00:41:59.694 "target": "spare", 00:41:59.694 "progress": { 00:41:59.694 "blocks": 65280, 00:41:59.694 "percent": 33 00:41:59.694 } 00:41:59.694 }, 00:41:59.694 "base_bdevs_list": [ 00:41:59.694 { 00:41:59.694 "name": "spare", 00:41:59.694 "uuid": "74f31db4-87e4-5991-b723-d85a06d4c4ff", 00:41:59.694 "is_configured": true, 00:41:59.694 "data_offset": 0, 00:41:59.694 "data_size": 65536 00:41:59.694 }, 00:41:59.694 { 00:41:59.694 "name": "BaseBdev2", 00:41:59.694 "uuid": "59ff7d36-c8f7-59cd-b9e1-88a29c162e2e", 00:41:59.694 "is_configured": true, 00:41:59.694 "data_offset": 0, 00:41:59.694 "data_size": 65536 00:41:59.694 }, 00:41:59.694 { 00:41:59.694 "name": "BaseBdev3", 00:41:59.694 "uuid": "e5fc7e3d-0eb4-585c-9fc8-4fba63c01c43", 00:41:59.694 "is_configured": true, 00:41:59.694 "data_offset": 0, 00:41:59.694 
"data_size": 65536 00:41:59.694 }, 00:41:59.694 { 00:41:59.694 "name": "BaseBdev4", 00:41:59.694 "uuid": "79c0588e-87f6-5ac2-af97-d97134e78364", 00:41:59.694 "is_configured": true, 00:41:59.694 "data_offset": 0, 00:41:59.694 "data_size": 65536 00:41:59.694 } 00:41:59.694 ] 00:41:59.694 }' 00:41:59.694 17:39:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:41:59.694 17:39:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:59.694 17:39:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:41:59.694 17:39:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:41:59.694 17:39:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:42:00.634 17:39:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:42:00.634 17:39:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:00.634 17:39:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:00.634 17:39:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:00.634 17:39:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:00.634 17:39:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:00.634 17:39:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:00.634 17:39:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:00.634 17:39:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:00.634 17:39:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:42:00.894 17:39:01 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:00.894 17:39:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:00.894 "name": "raid_bdev1", 00:42:00.894 "uuid": "72aef380-58d1-498e-8b82-55973d3dd743", 00:42:00.894 "strip_size_kb": 64, 00:42:00.894 "state": "online", 00:42:00.894 "raid_level": "raid5f", 00:42:00.894 "superblock": false, 00:42:00.894 "num_base_bdevs": 4, 00:42:00.894 "num_base_bdevs_discovered": 4, 00:42:00.894 "num_base_bdevs_operational": 4, 00:42:00.894 "process": { 00:42:00.894 "type": "rebuild", 00:42:00.894 "target": "spare", 00:42:00.894 "progress": { 00:42:00.894 "blocks": 86400, 00:42:00.894 "percent": 43 00:42:00.894 } 00:42:00.894 }, 00:42:00.894 "base_bdevs_list": [ 00:42:00.894 { 00:42:00.894 "name": "spare", 00:42:00.894 "uuid": "74f31db4-87e4-5991-b723-d85a06d4c4ff", 00:42:00.894 "is_configured": true, 00:42:00.894 "data_offset": 0, 00:42:00.894 "data_size": 65536 00:42:00.894 }, 00:42:00.894 { 00:42:00.894 "name": "BaseBdev2", 00:42:00.894 "uuid": "59ff7d36-c8f7-59cd-b9e1-88a29c162e2e", 00:42:00.894 "is_configured": true, 00:42:00.894 "data_offset": 0, 00:42:00.894 "data_size": 65536 00:42:00.894 }, 00:42:00.894 { 00:42:00.894 "name": "BaseBdev3", 00:42:00.894 "uuid": "e5fc7e3d-0eb4-585c-9fc8-4fba63c01c43", 00:42:00.894 "is_configured": true, 00:42:00.894 "data_offset": 0, 00:42:00.894 "data_size": 65536 00:42:00.894 }, 00:42:00.894 { 00:42:00.894 "name": "BaseBdev4", 00:42:00.894 "uuid": "79c0588e-87f6-5ac2-af97-d97134e78364", 00:42:00.894 "is_configured": true, 00:42:00.894 "data_offset": 0, 00:42:00.894 "data_size": 65536 00:42:00.894 } 00:42:00.894 ] 00:42:00.894 }' 00:42:00.894 17:39:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:00.894 17:39:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:00.894 17:39:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:42:00.894 17:39:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:00.894 17:39:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:42:01.852 17:39:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:42:01.852 17:39:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:01.852 17:39:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:01.852 17:39:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:01.852 17:39:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:01.852 17:39:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:01.852 17:39:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:01.852 17:39:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.852 17:39:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:01.852 17:39:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:42:01.852 17:39:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.852 17:39:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:01.852 "name": "raid_bdev1", 00:42:01.852 "uuid": "72aef380-58d1-498e-8b82-55973d3dd743", 00:42:01.852 "strip_size_kb": 64, 00:42:01.852 "state": "online", 00:42:01.852 "raid_level": "raid5f", 00:42:01.852 "superblock": false, 00:42:01.852 "num_base_bdevs": 4, 00:42:01.852 "num_base_bdevs_discovered": 4, 00:42:01.852 "num_base_bdevs_operational": 4, 00:42:01.852 "process": { 00:42:01.852 "type": "rebuild", 00:42:01.852 "target": "spare", 00:42:01.852 
"progress": { 00:42:01.852 "blocks": 107520, 00:42:01.852 "percent": 54 00:42:01.852 } 00:42:01.852 }, 00:42:01.852 "base_bdevs_list": [ 00:42:01.852 { 00:42:01.852 "name": "spare", 00:42:01.852 "uuid": "74f31db4-87e4-5991-b723-d85a06d4c4ff", 00:42:01.852 "is_configured": true, 00:42:01.852 "data_offset": 0, 00:42:01.852 "data_size": 65536 00:42:01.852 }, 00:42:01.852 { 00:42:01.852 "name": "BaseBdev2", 00:42:01.852 "uuid": "59ff7d36-c8f7-59cd-b9e1-88a29c162e2e", 00:42:01.852 "is_configured": true, 00:42:01.852 "data_offset": 0, 00:42:01.852 "data_size": 65536 00:42:01.852 }, 00:42:01.852 { 00:42:01.852 "name": "BaseBdev3", 00:42:01.852 "uuid": "e5fc7e3d-0eb4-585c-9fc8-4fba63c01c43", 00:42:01.852 "is_configured": true, 00:42:01.852 "data_offset": 0, 00:42:01.852 "data_size": 65536 00:42:01.852 }, 00:42:01.852 { 00:42:01.852 "name": "BaseBdev4", 00:42:01.852 "uuid": "79c0588e-87f6-5ac2-af97-d97134e78364", 00:42:01.852 "is_configured": true, 00:42:01.852 "data_offset": 0, 00:42:01.852 "data_size": 65536 00:42:01.852 } 00:42:01.852 ] 00:42:01.852 }' 00:42:01.852 17:39:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:01.852 17:39:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:01.852 17:39:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:02.112 17:39:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:02.112 17:39:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:42:03.050 17:39:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:42:03.051 17:39:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:03.051 17:39:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:03.051 17:39:03 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:03.051 17:39:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:03.051 17:39:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:03.051 17:39:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:03.051 17:39:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:03.051 17:39:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:03.051 17:39:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:42:03.051 17:39:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:03.051 17:39:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:03.051 "name": "raid_bdev1", 00:42:03.051 "uuid": "72aef380-58d1-498e-8b82-55973d3dd743", 00:42:03.051 "strip_size_kb": 64, 00:42:03.051 "state": "online", 00:42:03.051 "raid_level": "raid5f", 00:42:03.051 "superblock": false, 00:42:03.051 "num_base_bdevs": 4, 00:42:03.051 "num_base_bdevs_discovered": 4, 00:42:03.051 "num_base_bdevs_operational": 4, 00:42:03.051 "process": { 00:42:03.051 "type": "rebuild", 00:42:03.051 "target": "spare", 00:42:03.051 "progress": { 00:42:03.051 "blocks": 130560, 00:42:03.051 "percent": 66 00:42:03.051 } 00:42:03.051 }, 00:42:03.051 "base_bdevs_list": [ 00:42:03.051 { 00:42:03.051 "name": "spare", 00:42:03.051 "uuid": "74f31db4-87e4-5991-b723-d85a06d4c4ff", 00:42:03.051 "is_configured": true, 00:42:03.051 "data_offset": 0, 00:42:03.051 "data_size": 65536 00:42:03.051 }, 00:42:03.051 { 00:42:03.051 "name": "BaseBdev2", 00:42:03.051 "uuid": "59ff7d36-c8f7-59cd-b9e1-88a29c162e2e", 00:42:03.051 "is_configured": true, 00:42:03.051 "data_offset": 0, 00:42:03.051 "data_size": 65536 00:42:03.051 }, 00:42:03.051 { 
00:42:03.051 "name": "BaseBdev3", 00:42:03.051 "uuid": "e5fc7e3d-0eb4-585c-9fc8-4fba63c01c43", 00:42:03.051 "is_configured": true, 00:42:03.051 "data_offset": 0, 00:42:03.051 "data_size": 65536 00:42:03.051 }, 00:42:03.051 { 00:42:03.051 "name": "BaseBdev4", 00:42:03.051 "uuid": "79c0588e-87f6-5ac2-af97-d97134e78364", 00:42:03.051 "is_configured": true, 00:42:03.051 "data_offset": 0, 00:42:03.051 "data_size": 65536 00:42:03.051 } 00:42:03.051 ] 00:42:03.051 }' 00:42:03.051 17:39:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:03.051 17:39:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:03.051 17:39:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:03.051 17:39:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:03.051 17:39:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:42:04.431 17:39:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:42:04.431 17:39:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:04.431 17:39:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:04.431 17:39:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:04.431 17:39:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:04.431 17:39:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:04.431 17:39:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:04.431 17:39:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:04.431 17:39:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:42:04.431 17:39:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:42:04.431 17:39:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:04.431 17:39:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:04.431 "name": "raid_bdev1", 00:42:04.431 "uuid": "72aef380-58d1-498e-8b82-55973d3dd743", 00:42:04.431 "strip_size_kb": 64, 00:42:04.431 "state": "online", 00:42:04.431 "raid_level": "raid5f", 00:42:04.431 "superblock": false, 00:42:04.431 "num_base_bdevs": 4, 00:42:04.431 "num_base_bdevs_discovered": 4, 00:42:04.431 "num_base_bdevs_operational": 4, 00:42:04.431 "process": { 00:42:04.431 "type": "rebuild", 00:42:04.431 "target": "spare", 00:42:04.431 "progress": { 00:42:04.431 "blocks": 151680, 00:42:04.431 "percent": 77 00:42:04.431 } 00:42:04.431 }, 00:42:04.431 "base_bdevs_list": [ 00:42:04.431 { 00:42:04.431 "name": "spare", 00:42:04.431 "uuid": "74f31db4-87e4-5991-b723-d85a06d4c4ff", 00:42:04.431 "is_configured": true, 00:42:04.431 "data_offset": 0, 00:42:04.431 "data_size": 65536 00:42:04.431 }, 00:42:04.431 { 00:42:04.431 "name": "BaseBdev2", 00:42:04.432 "uuid": "59ff7d36-c8f7-59cd-b9e1-88a29c162e2e", 00:42:04.432 "is_configured": true, 00:42:04.432 "data_offset": 0, 00:42:04.432 "data_size": 65536 00:42:04.432 }, 00:42:04.432 { 00:42:04.432 "name": "BaseBdev3", 00:42:04.432 "uuid": "e5fc7e3d-0eb4-585c-9fc8-4fba63c01c43", 00:42:04.432 "is_configured": true, 00:42:04.432 "data_offset": 0, 00:42:04.432 "data_size": 65536 00:42:04.432 }, 00:42:04.432 { 00:42:04.432 "name": "BaseBdev4", 00:42:04.432 "uuid": "79c0588e-87f6-5ac2-af97-d97134e78364", 00:42:04.432 "is_configured": true, 00:42:04.432 "data_offset": 0, 00:42:04.432 "data_size": 65536 00:42:04.432 } 00:42:04.432 ] 00:42:04.432 }' 00:42:04.432 17:39:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:04.432 17:39:04 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:04.432 17:39:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:04.432 17:39:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:04.432 17:39:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:42:05.371 17:39:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:42:05.371 17:39:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:05.371 17:39:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:05.371 17:39:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:05.371 17:39:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:05.371 17:39:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:05.371 17:39:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:05.371 17:39:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:05.371 17:39:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:05.371 17:39:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:42:05.371 17:39:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:05.371 17:39:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:05.371 "name": "raid_bdev1", 00:42:05.371 "uuid": "72aef380-58d1-498e-8b82-55973d3dd743", 00:42:05.371 "strip_size_kb": 64, 00:42:05.371 "state": "online", 00:42:05.371 "raid_level": "raid5f", 00:42:05.371 "superblock": false, 00:42:05.371 "num_base_bdevs": 4, 00:42:05.371 
"num_base_bdevs_discovered": 4, 00:42:05.371 "num_base_bdevs_operational": 4, 00:42:05.371 "process": { 00:42:05.371 "type": "rebuild", 00:42:05.371 "target": "spare", 00:42:05.371 "progress": { 00:42:05.371 "blocks": 174720, 00:42:05.371 "percent": 88 00:42:05.371 } 00:42:05.371 }, 00:42:05.371 "base_bdevs_list": [ 00:42:05.371 { 00:42:05.371 "name": "spare", 00:42:05.371 "uuid": "74f31db4-87e4-5991-b723-d85a06d4c4ff", 00:42:05.371 "is_configured": true, 00:42:05.371 "data_offset": 0, 00:42:05.371 "data_size": 65536 00:42:05.371 }, 00:42:05.371 { 00:42:05.371 "name": "BaseBdev2", 00:42:05.371 "uuid": "59ff7d36-c8f7-59cd-b9e1-88a29c162e2e", 00:42:05.371 "is_configured": true, 00:42:05.371 "data_offset": 0, 00:42:05.371 "data_size": 65536 00:42:05.371 }, 00:42:05.371 { 00:42:05.371 "name": "BaseBdev3", 00:42:05.371 "uuid": "e5fc7e3d-0eb4-585c-9fc8-4fba63c01c43", 00:42:05.371 "is_configured": true, 00:42:05.371 "data_offset": 0, 00:42:05.371 "data_size": 65536 00:42:05.371 }, 00:42:05.371 { 00:42:05.371 "name": "BaseBdev4", 00:42:05.371 "uuid": "79c0588e-87f6-5ac2-af97-d97134e78364", 00:42:05.371 "is_configured": true, 00:42:05.371 "data_offset": 0, 00:42:05.371 "data_size": 65536 00:42:05.371 } 00:42:05.371 ] 00:42:05.371 }' 00:42:05.371 17:39:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:05.371 17:39:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:05.371 17:39:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:05.371 17:39:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:05.371 17:39:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:42:06.751 17:39:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:42:06.752 17:39:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:42:06.752 17:39:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:06.752 17:39:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:06.752 17:39:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:06.752 17:39:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:06.752 17:39:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:06.752 17:39:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:06.752 17:39:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:42:06.752 17:39:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:06.752 17:39:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:06.752 17:39:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:06.752 "name": "raid_bdev1", 00:42:06.752 "uuid": "72aef380-58d1-498e-8b82-55973d3dd743", 00:42:06.752 "strip_size_kb": 64, 00:42:06.752 "state": "online", 00:42:06.752 "raid_level": "raid5f", 00:42:06.752 "superblock": false, 00:42:06.752 "num_base_bdevs": 4, 00:42:06.752 "num_base_bdevs_discovered": 4, 00:42:06.752 "num_base_bdevs_operational": 4, 00:42:06.752 "process": { 00:42:06.752 "type": "rebuild", 00:42:06.752 "target": "spare", 00:42:06.752 "progress": { 00:42:06.752 "blocks": 195840, 00:42:06.752 "percent": 99 00:42:06.752 } 00:42:06.752 }, 00:42:06.752 "base_bdevs_list": [ 00:42:06.752 { 00:42:06.752 "name": "spare", 00:42:06.752 "uuid": "74f31db4-87e4-5991-b723-d85a06d4c4ff", 00:42:06.752 "is_configured": true, 00:42:06.752 "data_offset": 0, 00:42:06.752 "data_size": 65536 00:42:06.752 }, 00:42:06.752 { 00:42:06.752 "name": "BaseBdev2", 00:42:06.752 "uuid": 
"59ff7d36-c8f7-59cd-b9e1-88a29c162e2e", 00:42:06.752 "is_configured": true, 00:42:06.752 "data_offset": 0, 00:42:06.752 "data_size": 65536 00:42:06.752 }, 00:42:06.752 { 00:42:06.752 "name": "BaseBdev3", 00:42:06.752 "uuid": "e5fc7e3d-0eb4-585c-9fc8-4fba63c01c43", 00:42:06.752 "is_configured": true, 00:42:06.752 "data_offset": 0, 00:42:06.752 "data_size": 65536 00:42:06.752 }, 00:42:06.752 { 00:42:06.752 "name": "BaseBdev4", 00:42:06.752 "uuid": "79c0588e-87f6-5ac2-af97-d97134e78364", 00:42:06.752 "is_configured": true, 00:42:06.752 "data_offset": 0, 00:42:06.752 "data_size": 65536 00:42:06.752 } 00:42:06.752 ] 00:42:06.752 }' 00:42:06.752 17:39:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:06.752 [2024-11-26 17:39:07.106120] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:42:06.752 [2024-11-26 17:39:07.106250] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:42:06.752 [2024-11-26 17:39:07.106327] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:06.752 17:39:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:06.752 17:39:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:06.752 17:39:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:06.752 17:39:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:42:07.778 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:42:07.778 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:07.778 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:07.778 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:42:07.778 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:07.778 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:07.778 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:07.778 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:07.778 17:39:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:07.778 17:39:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:42:07.778 17:39:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:07.778 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:07.778 "name": "raid_bdev1", 00:42:07.778 "uuid": "72aef380-58d1-498e-8b82-55973d3dd743", 00:42:07.778 "strip_size_kb": 64, 00:42:07.778 "state": "online", 00:42:07.778 "raid_level": "raid5f", 00:42:07.778 "superblock": false, 00:42:07.778 "num_base_bdevs": 4, 00:42:07.778 "num_base_bdevs_discovered": 4, 00:42:07.778 "num_base_bdevs_operational": 4, 00:42:07.778 "base_bdevs_list": [ 00:42:07.778 { 00:42:07.778 "name": "spare", 00:42:07.778 "uuid": "74f31db4-87e4-5991-b723-d85a06d4c4ff", 00:42:07.778 "is_configured": true, 00:42:07.778 "data_offset": 0, 00:42:07.778 "data_size": 65536 00:42:07.778 }, 00:42:07.778 { 00:42:07.778 "name": "BaseBdev2", 00:42:07.778 "uuid": "59ff7d36-c8f7-59cd-b9e1-88a29c162e2e", 00:42:07.778 "is_configured": true, 00:42:07.778 "data_offset": 0, 00:42:07.778 "data_size": 65536 00:42:07.778 }, 00:42:07.778 { 00:42:07.778 "name": "BaseBdev3", 00:42:07.778 "uuid": "e5fc7e3d-0eb4-585c-9fc8-4fba63c01c43", 00:42:07.778 "is_configured": true, 00:42:07.778 "data_offset": 0, 00:42:07.778 "data_size": 65536 00:42:07.778 }, 00:42:07.778 { 00:42:07.778 "name": "BaseBdev4", 00:42:07.778 
"uuid": "79c0588e-87f6-5ac2-af97-d97134e78364", 00:42:07.778 "is_configured": true, 00:42:07.778 "data_offset": 0, 00:42:07.778 "data_size": 65536 00:42:07.778 } 00:42:07.778 ] 00:42:07.778 }' 00:42:07.778 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:07.778 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:42:07.778 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:07.778 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:42:07.778 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:42:07.778 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:42:07.778 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:07.778 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:42:07.778 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:42:07.778 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:07.778 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:07.778 17:39:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:07.778 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:07.778 17:39:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:42:07.778 17:39:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:07.778 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:07.778 "name": "raid_bdev1", 00:42:07.778 "uuid": 
"72aef380-58d1-498e-8b82-55973d3dd743", 00:42:07.778 "strip_size_kb": 64, 00:42:07.778 "state": "online", 00:42:07.778 "raid_level": "raid5f", 00:42:07.778 "superblock": false, 00:42:07.778 "num_base_bdevs": 4, 00:42:07.778 "num_base_bdevs_discovered": 4, 00:42:07.778 "num_base_bdevs_operational": 4, 00:42:07.778 "base_bdevs_list": [ 00:42:07.778 { 00:42:07.778 "name": "spare", 00:42:07.778 "uuid": "74f31db4-87e4-5991-b723-d85a06d4c4ff", 00:42:07.778 "is_configured": true, 00:42:07.778 "data_offset": 0, 00:42:07.778 "data_size": 65536 00:42:07.778 }, 00:42:07.778 { 00:42:07.778 "name": "BaseBdev2", 00:42:07.778 "uuid": "59ff7d36-c8f7-59cd-b9e1-88a29c162e2e", 00:42:07.778 "is_configured": true, 00:42:07.778 "data_offset": 0, 00:42:07.778 "data_size": 65536 00:42:07.778 }, 00:42:07.778 { 00:42:07.778 "name": "BaseBdev3", 00:42:07.778 "uuid": "e5fc7e3d-0eb4-585c-9fc8-4fba63c01c43", 00:42:07.778 "is_configured": true, 00:42:07.778 "data_offset": 0, 00:42:07.778 "data_size": 65536 00:42:07.778 }, 00:42:07.778 { 00:42:07.778 "name": "BaseBdev4", 00:42:07.778 "uuid": "79c0588e-87f6-5ac2-af97-d97134e78364", 00:42:07.778 "is_configured": true, 00:42:07.778 "data_offset": 0, 00:42:07.778 "data_size": 65536 00:42:07.778 } 00:42:07.778 ] 00:42:07.778 }' 00:42:07.778 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:07.778 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:42:07.778 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:07.778 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:42:07.778 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:42:07.778 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:07.778 17:39:08 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:07.778 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:42:07.778 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:42:07.778 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:42:07.778 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:07.778 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:07.779 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:07.779 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:07.779 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:07.779 17:39:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:07.779 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:07.779 17:39:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:42:07.779 17:39:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:08.038 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:08.038 "name": "raid_bdev1", 00:42:08.038 "uuid": "72aef380-58d1-498e-8b82-55973d3dd743", 00:42:08.038 "strip_size_kb": 64, 00:42:08.038 "state": "online", 00:42:08.038 "raid_level": "raid5f", 00:42:08.038 "superblock": false, 00:42:08.038 "num_base_bdevs": 4, 00:42:08.038 "num_base_bdevs_discovered": 4, 00:42:08.038 "num_base_bdevs_operational": 4, 00:42:08.038 "base_bdevs_list": [ 00:42:08.038 { 00:42:08.038 "name": "spare", 00:42:08.038 "uuid": "74f31db4-87e4-5991-b723-d85a06d4c4ff", 00:42:08.038 "is_configured": 
true, 00:42:08.038 "data_offset": 0, 00:42:08.038 "data_size": 65536 00:42:08.038 }, 00:42:08.038 { 00:42:08.038 "name": "BaseBdev2", 00:42:08.038 "uuid": "59ff7d36-c8f7-59cd-b9e1-88a29c162e2e", 00:42:08.038 "is_configured": true, 00:42:08.038 "data_offset": 0, 00:42:08.038 "data_size": 65536 00:42:08.038 }, 00:42:08.038 { 00:42:08.038 "name": "BaseBdev3", 00:42:08.038 "uuid": "e5fc7e3d-0eb4-585c-9fc8-4fba63c01c43", 00:42:08.038 "is_configured": true, 00:42:08.038 "data_offset": 0, 00:42:08.038 "data_size": 65536 00:42:08.038 }, 00:42:08.038 { 00:42:08.038 "name": "BaseBdev4", 00:42:08.038 "uuid": "79c0588e-87f6-5ac2-af97-d97134e78364", 00:42:08.038 "is_configured": true, 00:42:08.038 "data_offset": 0, 00:42:08.038 "data_size": 65536 00:42:08.038 } 00:42:08.038 ] 00:42:08.038 }' 00:42:08.038 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:08.038 17:39:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:42:08.298 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:42:08.298 17:39:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:08.298 17:39:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:42:08.298 [2024-11-26 17:39:08.896683] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:42:08.298 [2024-11-26 17:39:08.896728] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:42:08.298 [2024-11-26 17:39:08.896842] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:42:08.298 [2024-11-26 17:39:08.896955] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:42:08.298 [2024-11-26 17:39:08.896967] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:42:08.298 17:39:08 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:08.298 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:42:08.298 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:08.298 17:39:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:08.298 17:39:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:42:08.298 17:39:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:08.298 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:42:08.298 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:42:08.298 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:42:08.298 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:42:08.298 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:42:08.298 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:42:08.298 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:42:08.298 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:42:08.298 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:42:08.298 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:42:08.298 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:42:08.298 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:42:08.298 17:39:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:42:08.557 /dev/nbd0 00:42:08.557 17:39:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:42:08.557 17:39:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:42:08.557 17:39:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:42:08.557 17:39:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:42:08.557 17:39:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:42:08.557 17:39:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:42:08.557 17:39:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:42:08.557 17:39:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:42:08.557 17:39:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:42:08.557 17:39:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:42:08.557 17:39:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:08.557 1+0 records in 00:42:08.557 1+0 records out 00:42:08.557 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000359613 s, 11.4 MB/s 00:42:08.557 17:39:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:08.557 17:39:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:42:08.558 17:39:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:08.558 17:39:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:42:08.558 17:39:09 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@893 -- # return 0 00:42:08.558 17:39:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:42:08.558 17:39:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:42:08.558 17:39:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:42:08.816 /dev/nbd1 00:42:08.816 17:39:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:42:08.816 17:39:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:42:08.816 17:39:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:42:08.816 17:39:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:42:08.816 17:39:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:42:08.816 17:39:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:42:08.816 17:39:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:42:08.816 17:39:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:42:08.816 17:39:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:42:08.816 17:39:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:42:08.816 17:39:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:08.816 1+0 records in 00:42:08.816 1+0 records out 00:42:08.816 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311803 s, 13.1 MB/s 00:42:08.816 17:39:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:08.816 17:39:09 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@890 -- # size=4096 00:42:08.816 17:39:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:08.816 17:39:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:42:08.816 17:39:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:42:08.816 17:39:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:42:08.816 17:39:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:42:08.816 17:39:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:42:09.075 17:39:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:42:09.075 17:39:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:42:09.075 17:39:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:42:09.075 17:39:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:42:09.075 17:39:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:42:09.075 17:39:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:09.075 17:39:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:42:09.335 17:39:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:42:09.335 17:39:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:42:09.335 17:39:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:42:09.335 17:39:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:09.335 17:39:09 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:09.335 17:39:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:42:09.335 17:39:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:42:09.335 17:39:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:42:09.335 17:39:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:09.335 17:39:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:42:09.595 17:39:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:42:09.595 17:39:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:42:09.595 17:39:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:42:09.595 17:39:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:09.595 17:39:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:09.595 17:39:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:42:09.595 17:39:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:42:09.595 17:39:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:42:09.595 17:39:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:42:09.595 17:39:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84903 00:42:09.595 17:39:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 84903 ']' 00:42:09.595 17:39:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 84903 00:42:09.595 17:39:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:42:09.595 17:39:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 
-- # '[' Linux = Linux ']' 00:42:09.595 17:39:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84903 00:42:09.595 17:39:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:09.595 17:39:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:09.595 17:39:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84903' 00:42:09.595 killing process with pid 84903 00:42:09.595 17:39:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 84903 00:42:09.595 Received shutdown signal, test time was about 60.000000 seconds 00:42:09.595 00:42:09.595 Latency(us) 00:42:09.595 [2024-11-26T17:39:10.290Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:09.595 [2024-11-26T17:39:10.290Z] =================================================================================================================== 00:42:09.595 [2024-11-26T17:39:10.290Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:42:09.595 [2024-11-26 17:39:10.164761] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:42:09.595 17:39:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 84903 00:42:10.164 [2024-11-26 17:39:10.691058] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:42:11.547 17:39:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:42:11.547 00:42:11.547 real 0m20.457s 00:42:11.547 user 0m24.196s 00:42:11.547 sys 0m2.567s 00:42:11.547 17:39:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:11.547 17:39:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:42:11.547 ************************************ 00:42:11.547 END TEST raid5f_rebuild_test 00:42:11.547 ************************************ 00:42:11.547 17:39:11 
bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:42:11.547 17:39:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:42:11.547 17:39:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:11.547 17:39:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:42:11.547 ************************************ 00:42:11.547 START TEST raid5f_rebuild_test_sb 00:42:11.547 ************************************ 00:42:11.547 17:39:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:42:11.547 17:39:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:42:11.547 17:39:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:42:11.547 17:39:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:42:11.547 17:39:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:42:11.547 17:39:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:42:11.548 17:39:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:42:11.548 17:39:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:42:11.548 17:39:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:42:11.548 17:39:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:42:11.548 17:39:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:42:11.548 17:39:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:42:11.548 17:39:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:42:11.548 17:39:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:42:11.548 17:39:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:42:11.548 17:39:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:42:11.548 17:39:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:42:11.548 17:39:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:42:11.548 17:39:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:42:11.548 17:39:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:42:11.548 17:39:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:42:11.548 17:39:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:42:11.548 17:39:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:42:11.548 17:39:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:42:11.548 17:39:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:42:11.548 17:39:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:42:11.548 17:39:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:42:11.548 17:39:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:42:11.548 17:39:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:42:11.548 17:39:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:42:11.548 17:39:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:42:11.548 17:39:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:42:11.548 17:39:11 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:42:11.548 17:39:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85425 00:42:11.548 17:39:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:42:11.548 17:39:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85425 00:42:11.548 17:39:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85425 ']' 00:42:11.548 17:39:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:11.548 17:39:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:11.548 17:39:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:11.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:11.548 17:39:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:11.548 17:39:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:11.548 [2024-11-26 17:39:12.066107] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:42:11.548 [2024-11-26 17:39:12.066347] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85425 ] 00:42:11.548 I/O size of 3145728 is greater than zero copy threshold (65536). 00:42:11.548 Zero copy mechanism will not be used. 
00:42:11.806 [2024-11-26 17:39:12.250577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:11.806 [2024-11-26 17:39:12.431020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:12.064 [2024-11-26 17:39:12.678177] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:42:12.064 [2024-11-26 17:39:12.678353] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:42:12.323 17:39:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:12.323 17:39:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:42:12.323 17:39:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:42:12.323 17:39:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:42:12.323 17:39:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:12.323 17:39:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:12.323 BaseBdev1_malloc 00:42:12.323 17:39:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:12.323 17:39:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:42:12.323 17:39:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:12.323 17:39:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:12.323 [2024-11-26 17:39:12.989709] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:42:12.323 [2024-11-26 17:39:12.989832] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:12.323 [2024-11-26 17:39:12.989875] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:42:12.323 
[2024-11-26 17:39:12.989920] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:12.323 [2024-11-26 17:39:12.992344] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:12.323 [2024-11-26 17:39:12.992430] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:42:12.323 BaseBdev1 00:42:12.323 17:39:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:12.323 17:39:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:42:12.323 17:39:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:42:12.323 17:39:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:12.323 17:39:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:12.682 BaseBdev2_malloc 00:42:12.682 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:12.682 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:42:12.682 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:12.682 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:12.682 [2024-11-26 17:39:13.052933] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:42:12.682 [2024-11-26 17:39:13.053078] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:12.682 [2024-11-26 17:39:13.053132] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:42:12.682 [2024-11-26 17:39:13.053179] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:12.682 [2024-11-26 17:39:13.055750] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:12.682 [2024-11-26 17:39:13.055820] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:42:12.682 BaseBdev2 00:42:12.682 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:12.682 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:42:12.682 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:42:12.682 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:12.682 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:12.682 BaseBdev3_malloc 00:42:12.682 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:12.682 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:42:12.682 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:12.682 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:12.682 [2024-11-26 17:39:13.126381] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:42:12.682 [2024-11-26 17:39:13.126489] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:12.682 [2024-11-26 17:39:13.126544] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:42:12.682 [2024-11-26 17:39:13.126589] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:12.682 [2024-11-26 17:39:13.129042] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:12.682 [2024-11-26 17:39:13.129114] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:42:12.682 BaseBdev3 00:42:12.682 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:12.682 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:42:12.682 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:42:12.682 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:12.682 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:12.682 BaseBdev4_malloc 00:42:12.682 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:12.682 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:42:12.682 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:12.682 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:12.682 [2024-11-26 17:39:13.190996] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:42:12.682 [2024-11-26 17:39:13.191079] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:12.682 [2024-11-26 17:39:13.191108] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:42:12.682 [2024-11-26 17:39:13.191121] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:12.682 [2024-11-26 17:39:13.193854] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:12.682 [2024-11-26 17:39:13.193978] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:42:12.682 BaseBdev4 00:42:12.683 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:12.683 17:39:13 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:42:12.683 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:12.683 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:12.683 spare_malloc 00:42:12.683 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:12.683 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:42:12.683 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:12.683 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:12.683 spare_delay 00:42:12.683 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:12.683 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:42:12.683 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:12.683 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:12.683 [2024-11-26 17:39:13.269189] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:42:12.683 [2024-11-26 17:39:13.269347] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:12.683 [2024-11-26 17:39:13.269396] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:42:12.683 [2024-11-26 17:39:13.269437] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:12.683 [2024-11-26 17:39:13.272362] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:12.683 [2024-11-26 17:39:13.272480] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: spare 00:42:12.683 spare 00:42:12.683 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:12.683 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:42:12.683 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:12.683 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:12.683 [2024-11-26 17:39:13.281290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:42:12.683 [2024-11-26 17:39:13.283684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:42:12.683 [2024-11-26 17:39:13.283777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:42:12.683 [2024-11-26 17:39:13.283880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:42:12.683 [2024-11-26 17:39:13.284169] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:42:12.683 [2024-11-26 17:39:13.284226] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:42:12.683 [2024-11-26 17:39:13.284597] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:42:12.683 [2024-11-26 17:39:13.293039] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:42:12.683 [2024-11-26 17:39:13.293102] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:42:12.683 [2024-11-26 17:39:13.293424] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:12.683 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:12.683 17:39:13 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:42:12.683 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:12.683 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:12.683 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:42:12.683 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:42:12.683 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:42:12.683 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:12.683 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:12.683 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:12.683 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:12.683 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:12.683 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:12.683 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:12.683 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:12.683 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:12.683 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:12.683 "name": "raid_bdev1", 00:42:12.683 "uuid": "d2e2f0cd-9cdf-4e11-a6d0-dcf361da4956", 00:42:12.683 "strip_size_kb": 64, 00:42:12.683 "state": "online", 00:42:12.683 "raid_level": "raid5f", 00:42:12.683 "superblock": true, 
00:42:12.683 "num_base_bdevs": 4, 00:42:12.683 "num_base_bdevs_discovered": 4, 00:42:12.683 "num_base_bdevs_operational": 4, 00:42:12.683 "base_bdevs_list": [ 00:42:12.683 { 00:42:12.683 "name": "BaseBdev1", 00:42:12.683 "uuid": "d968ebfb-cb82-5516-8032-33b5e8b28034", 00:42:12.683 "is_configured": true, 00:42:12.683 "data_offset": 2048, 00:42:12.683 "data_size": 63488 00:42:12.683 }, 00:42:12.683 { 00:42:12.683 "name": "BaseBdev2", 00:42:12.683 "uuid": "69cf60c0-a211-5105-9973-cc2ec3459d02", 00:42:12.683 "is_configured": true, 00:42:12.683 "data_offset": 2048, 00:42:12.683 "data_size": 63488 00:42:12.683 }, 00:42:12.683 { 00:42:12.683 "name": "BaseBdev3", 00:42:12.683 "uuid": "7c3aee6b-4e5c-54a6-a1d2-2db07776ca80", 00:42:12.683 "is_configured": true, 00:42:12.683 "data_offset": 2048, 00:42:12.683 "data_size": 63488 00:42:12.683 }, 00:42:12.683 { 00:42:12.683 "name": "BaseBdev4", 00:42:12.683 "uuid": "e8d2b9b9-3e17-5df5-be5a-60012ebee856", 00:42:12.683 "is_configured": true, 00:42:12.683 "data_offset": 2048, 00:42:12.683 "data_size": 63488 00:42:12.683 } 00:42:12.683 ] 00:42:12.683 }' 00:42:12.683 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:12.683 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:13.250 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:42:13.251 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:42:13.251 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:13.251 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:13.251 [2024-11-26 17:39:13.743741] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:42:13.251 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:13.251 17:39:13 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:42:13.251 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:13.251 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:13.251 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:13.251 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:42:13.251 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:13.251 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:42:13.251 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:42:13.251 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:42:13.251 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:42:13.251 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:42:13.251 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:42:13.251 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:42:13.251 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:42:13.251 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:42:13.251 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:42:13.251 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:42:13.251 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:42:13.251 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:42:13.251 17:39:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:42:13.510 [2024-11-26 17:39:14.023065] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:42:13.510 /dev/nbd0 00:42:13.510 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:42:13.510 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:42:13.510 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:42:13.510 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:42:13.510 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:42:13.510 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:42:13.510 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:42:13.510 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:42:13.510 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:42:13.510 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:42:13.510 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:13.510 1+0 records in 00:42:13.510 1+0 records out 00:42:13.510 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000642555 s, 6.4 MB/s 00:42:13.510 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:13.510 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 
-- # size=4096 00:42:13.510 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:13.510 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:42:13.510 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:42:13.510 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:42:13.510 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:42:13.510 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:42:13.510 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:42:13.510 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:42:13.510 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:42:14.078 496+0 records in 00:42:14.078 496+0 records out 00:42:14.078 97517568 bytes (98 MB, 93 MiB) copied, 0.531666 s, 183 MB/s 00:42:14.078 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:42:14.078 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:42:14.078 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:42:14.078 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:42:14.078 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:42:14.078 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:14.078 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
00:42:14.338 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:42:14.338 [2024-11-26 17:39:14.873890] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:14.338 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:42:14.338 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:42:14.338 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:14.338 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:14.338 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:42:14.338 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:42:14.338 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:42:14.338 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:42:14.338 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:14.338 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:14.338 [2024-11-26 17:39:14.893264] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:42:14.338 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:14.338 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:42:14.338 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:14.338 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:14.338 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:42:14.338 17:39:14 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:42:14.338 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:42:14.338 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:14.338 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:14.338 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:14.338 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:14.338 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:14.338 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:14.338 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:14.338 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:14.338 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:14.338 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:14.338 "name": "raid_bdev1", 00:42:14.338 "uuid": "d2e2f0cd-9cdf-4e11-a6d0-dcf361da4956", 00:42:14.338 "strip_size_kb": 64, 00:42:14.338 "state": "online", 00:42:14.338 "raid_level": "raid5f", 00:42:14.338 "superblock": true, 00:42:14.338 "num_base_bdevs": 4, 00:42:14.338 "num_base_bdevs_discovered": 3, 00:42:14.338 "num_base_bdevs_operational": 3, 00:42:14.338 "base_bdevs_list": [ 00:42:14.338 { 00:42:14.338 "name": null, 00:42:14.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:14.338 "is_configured": false, 00:42:14.338 "data_offset": 0, 00:42:14.338 "data_size": 63488 00:42:14.338 }, 00:42:14.338 { 00:42:14.338 "name": "BaseBdev2", 00:42:14.338 "uuid": 
"69cf60c0-a211-5105-9973-cc2ec3459d02", 00:42:14.338 "is_configured": true, 00:42:14.338 "data_offset": 2048, 00:42:14.338 "data_size": 63488 00:42:14.338 }, 00:42:14.338 { 00:42:14.338 "name": "BaseBdev3", 00:42:14.338 "uuid": "7c3aee6b-4e5c-54a6-a1d2-2db07776ca80", 00:42:14.338 "is_configured": true, 00:42:14.338 "data_offset": 2048, 00:42:14.338 "data_size": 63488 00:42:14.338 }, 00:42:14.338 { 00:42:14.338 "name": "BaseBdev4", 00:42:14.338 "uuid": "e8d2b9b9-3e17-5df5-be5a-60012ebee856", 00:42:14.338 "is_configured": true, 00:42:14.338 "data_offset": 2048, 00:42:14.338 "data_size": 63488 00:42:14.338 } 00:42:14.338 ] 00:42:14.338 }' 00:42:14.338 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:14.338 17:39:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:14.904 17:39:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:42:14.904 17:39:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:14.904 17:39:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:14.904 [2024-11-26 17:39:15.376567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:42:14.904 [2024-11-26 17:39:15.395967] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:42:14.904 17:39:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:14.904 17:39:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:42:14.904 [2024-11-26 17:39:15.407243] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:42:15.840 17:39:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:15.840 17:39:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:42:15.840 17:39:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:15.840 17:39:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:15.840 17:39:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:15.840 17:39:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:15.840 17:39:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:15.840 17:39:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:15.840 17:39:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:15.840 17:39:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:15.840 17:39:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:15.840 "name": "raid_bdev1", 00:42:15.840 "uuid": "d2e2f0cd-9cdf-4e11-a6d0-dcf361da4956", 00:42:15.840 "strip_size_kb": 64, 00:42:15.840 "state": "online", 00:42:15.840 "raid_level": "raid5f", 00:42:15.840 "superblock": true, 00:42:15.840 "num_base_bdevs": 4, 00:42:15.840 "num_base_bdevs_discovered": 4, 00:42:15.840 "num_base_bdevs_operational": 4, 00:42:15.840 "process": { 00:42:15.840 "type": "rebuild", 00:42:15.840 "target": "spare", 00:42:15.840 "progress": { 00:42:15.840 "blocks": 17280, 00:42:15.840 "percent": 9 00:42:15.840 } 00:42:15.840 }, 00:42:15.840 "base_bdevs_list": [ 00:42:15.840 { 00:42:15.840 "name": "spare", 00:42:15.840 "uuid": "ff97438e-cc11-57d3-8603-21e05f17ba6b", 00:42:15.840 "is_configured": true, 00:42:15.840 "data_offset": 2048, 00:42:15.840 "data_size": 63488 00:42:15.840 }, 00:42:15.840 { 00:42:15.840 "name": "BaseBdev2", 00:42:15.840 "uuid": "69cf60c0-a211-5105-9973-cc2ec3459d02", 00:42:15.840 "is_configured": true, 00:42:15.840 
"data_offset": 2048, 00:42:15.840 "data_size": 63488 00:42:15.840 }, 00:42:15.840 { 00:42:15.840 "name": "BaseBdev3", 00:42:15.840 "uuid": "7c3aee6b-4e5c-54a6-a1d2-2db07776ca80", 00:42:15.840 "is_configured": true, 00:42:15.840 "data_offset": 2048, 00:42:15.840 "data_size": 63488 00:42:15.840 }, 00:42:15.840 { 00:42:15.840 "name": "BaseBdev4", 00:42:15.840 "uuid": "e8d2b9b9-3e17-5df5-be5a-60012ebee856", 00:42:15.840 "is_configured": true, 00:42:15.840 "data_offset": 2048, 00:42:15.840 "data_size": 63488 00:42:15.840 } 00:42:15.840 ] 00:42:15.840 }' 00:42:15.840 17:39:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:15.840 17:39:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:15.840 17:39:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:16.099 17:39:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:16.099 17:39:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:42:16.099 17:39:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:16.099 17:39:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:16.099 [2024-11-26 17:39:16.539582] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:16.099 [2024-11-26 17:39:16.619675] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:42:16.099 [2024-11-26 17:39:16.619887] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:16.099 [2024-11-26 17:39:16.619910] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:16.099 [2024-11-26 17:39:16.619921] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:42:16.099 
17:39:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:16.099 17:39:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:42:16.099 17:39:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:16.099 17:39:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:16.099 17:39:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:42:16.099 17:39:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:42:16.099 17:39:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:42:16.099 17:39:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:16.099 17:39:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:16.099 17:39:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:16.099 17:39:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:16.099 17:39:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:16.099 17:39:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:16.099 17:39:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:16.099 17:39:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:16.099 17:39:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:16.099 17:39:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:16.099 "name": "raid_bdev1", 00:42:16.099 "uuid": "d2e2f0cd-9cdf-4e11-a6d0-dcf361da4956", 00:42:16.099 
"strip_size_kb": 64, 00:42:16.099 "state": "online", 00:42:16.099 "raid_level": "raid5f", 00:42:16.099 "superblock": true, 00:42:16.099 "num_base_bdevs": 4, 00:42:16.099 "num_base_bdevs_discovered": 3, 00:42:16.099 "num_base_bdevs_operational": 3, 00:42:16.099 "base_bdevs_list": [ 00:42:16.099 { 00:42:16.099 "name": null, 00:42:16.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:16.099 "is_configured": false, 00:42:16.099 "data_offset": 0, 00:42:16.099 "data_size": 63488 00:42:16.099 }, 00:42:16.099 { 00:42:16.099 "name": "BaseBdev2", 00:42:16.099 "uuid": "69cf60c0-a211-5105-9973-cc2ec3459d02", 00:42:16.099 "is_configured": true, 00:42:16.099 "data_offset": 2048, 00:42:16.099 "data_size": 63488 00:42:16.099 }, 00:42:16.099 { 00:42:16.099 "name": "BaseBdev3", 00:42:16.099 "uuid": "7c3aee6b-4e5c-54a6-a1d2-2db07776ca80", 00:42:16.099 "is_configured": true, 00:42:16.099 "data_offset": 2048, 00:42:16.099 "data_size": 63488 00:42:16.099 }, 00:42:16.099 { 00:42:16.099 "name": "BaseBdev4", 00:42:16.099 "uuid": "e8d2b9b9-3e17-5df5-be5a-60012ebee856", 00:42:16.099 "is_configured": true, 00:42:16.099 "data_offset": 2048, 00:42:16.099 "data_size": 63488 00:42:16.099 } 00:42:16.099 ] 00:42:16.099 }' 00:42:16.099 17:39:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:16.099 17:39:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:16.666 17:39:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:42:16.666 17:39:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:16.666 17:39:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:42:16.666 17:39:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:42:16.666 17:39:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:16.666 
17:39:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:16.666 17:39:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:16.666 17:39:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:16.666 17:39:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:16.666 17:39:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:16.666 17:39:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:16.666 "name": "raid_bdev1", 00:42:16.666 "uuid": "d2e2f0cd-9cdf-4e11-a6d0-dcf361da4956", 00:42:16.666 "strip_size_kb": 64, 00:42:16.666 "state": "online", 00:42:16.666 "raid_level": "raid5f", 00:42:16.666 "superblock": true, 00:42:16.666 "num_base_bdevs": 4, 00:42:16.666 "num_base_bdevs_discovered": 3, 00:42:16.666 "num_base_bdevs_operational": 3, 00:42:16.666 "base_bdevs_list": [ 00:42:16.666 { 00:42:16.666 "name": null, 00:42:16.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:16.666 "is_configured": false, 00:42:16.666 "data_offset": 0, 00:42:16.666 "data_size": 63488 00:42:16.666 }, 00:42:16.666 { 00:42:16.666 "name": "BaseBdev2", 00:42:16.666 "uuid": "69cf60c0-a211-5105-9973-cc2ec3459d02", 00:42:16.666 "is_configured": true, 00:42:16.666 "data_offset": 2048, 00:42:16.666 "data_size": 63488 00:42:16.666 }, 00:42:16.666 { 00:42:16.666 "name": "BaseBdev3", 00:42:16.666 "uuid": "7c3aee6b-4e5c-54a6-a1d2-2db07776ca80", 00:42:16.666 "is_configured": true, 00:42:16.666 "data_offset": 2048, 00:42:16.666 "data_size": 63488 00:42:16.666 }, 00:42:16.666 { 00:42:16.666 "name": "BaseBdev4", 00:42:16.666 "uuid": "e8d2b9b9-3e17-5df5-be5a-60012ebee856", 00:42:16.666 "is_configured": true, 00:42:16.666 "data_offset": 2048, 00:42:16.666 "data_size": 63488 00:42:16.666 } 00:42:16.666 ] 00:42:16.666 }' 00:42:16.666 17:39:17 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:16.666 17:39:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:42:16.666 17:39:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:16.666 17:39:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:42:16.666 17:39:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:42:16.666 17:39:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:16.666 17:39:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:16.666 [2024-11-26 17:39:17.278667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:42:16.667 [2024-11-26 17:39:17.298606] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:42:16.667 17:39:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:16.667 17:39:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:42:16.667 [2024-11-26 17:39:17.311371] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:42:18.040 17:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:18.040 17:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:18.040 17:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:18.040 17:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:18.040 17:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:18.040 17:39:18 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:18.040 17:39:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:18.040 17:39:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:18.040 17:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:18.040 17:39:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:18.040 17:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:18.040 "name": "raid_bdev1", 00:42:18.040 "uuid": "d2e2f0cd-9cdf-4e11-a6d0-dcf361da4956", 00:42:18.040 "strip_size_kb": 64, 00:42:18.040 "state": "online", 00:42:18.040 "raid_level": "raid5f", 00:42:18.040 "superblock": true, 00:42:18.040 "num_base_bdevs": 4, 00:42:18.040 "num_base_bdevs_discovered": 4, 00:42:18.040 "num_base_bdevs_operational": 4, 00:42:18.040 "process": { 00:42:18.040 "type": "rebuild", 00:42:18.040 "target": "spare", 00:42:18.040 "progress": { 00:42:18.040 "blocks": 17280, 00:42:18.040 "percent": 9 00:42:18.040 } 00:42:18.040 }, 00:42:18.040 "base_bdevs_list": [ 00:42:18.040 { 00:42:18.040 "name": "spare", 00:42:18.040 "uuid": "ff97438e-cc11-57d3-8603-21e05f17ba6b", 00:42:18.040 "is_configured": true, 00:42:18.040 "data_offset": 2048, 00:42:18.040 "data_size": 63488 00:42:18.040 }, 00:42:18.040 { 00:42:18.040 "name": "BaseBdev2", 00:42:18.040 "uuid": "69cf60c0-a211-5105-9973-cc2ec3459d02", 00:42:18.040 "is_configured": true, 00:42:18.040 "data_offset": 2048, 00:42:18.040 "data_size": 63488 00:42:18.040 }, 00:42:18.040 { 00:42:18.040 "name": "BaseBdev3", 00:42:18.040 "uuid": "7c3aee6b-4e5c-54a6-a1d2-2db07776ca80", 00:42:18.040 "is_configured": true, 00:42:18.040 "data_offset": 2048, 00:42:18.040 "data_size": 63488 00:42:18.040 }, 00:42:18.040 { 00:42:18.040 "name": "BaseBdev4", 00:42:18.040 "uuid": "e8d2b9b9-3e17-5df5-be5a-60012ebee856", 
00:42:18.040 "is_configured": true, 00:42:18.040 "data_offset": 2048, 00:42:18.040 "data_size": 63488 00:42:18.040 } 00:42:18.040 ] 00:42:18.040 }' 00:42:18.040 17:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:18.040 17:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:18.040 17:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:18.040 17:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:18.040 17:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:42:18.040 17:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:42:18.040 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:42:18.040 17:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:42:18.040 17:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:42:18.040 17:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=653 00:42:18.040 17:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:42:18.040 17:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:18.040 17:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:18.040 17:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:18.041 17:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:18.041 17:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:18.041 17:39:18 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:18.041 17:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:18.041 17:39:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:18.041 17:39:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:18.041 17:39:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:18.041 17:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:18.041 "name": "raid_bdev1", 00:42:18.041 "uuid": "d2e2f0cd-9cdf-4e11-a6d0-dcf361da4956", 00:42:18.041 "strip_size_kb": 64, 00:42:18.041 "state": "online", 00:42:18.041 "raid_level": "raid5f", 00:42:18.041 "superblock": true, 00:42:18.041 "num_base_bdevs": 4, 00:42:18.041 "num_base_bdevs_discovered": 4, 00:42:18.041 "num_base_bdevs_operational": 4, 00:42:18.041 "process": { 00:42:18.041 "type": "rebuild", 00:42:18.041 "target": "spare", 00:42:18.041 "progress": { 00:42:18.041 "blocks": 21120, 00:42:18.041 "percent": 11 00:42:18.041 } 00:42:18.041 }, 00:42:18.041 "base_bdevs_list": [ 00:42:18.041 { 00:42:18.041 "name": "spare", 00:42:18.041 "uuid": "ff97438e-cc11-57d3-8603-21e05f17ba6b", 00:42:18.041 "is_configured": true, 00:42:18.041 "data_offset": 2048, 00:42:18.041 "data_size": 63488 00:42:18.041 }, 00:42:18.041 { 00:42:18.041 "name": "BaseBdev2", 00:42:18.041 "uuid": "69cf60c0-a211-5105-9973-cc2ec3459d02", 00:42:18.041 "is_configured": true, 00:42:18.041 "data_offset": 2048, 00:42:18.041 "data_size": 63488 00:42:18.041 }, 00:42:18.041 { 00:42:18.041 "name": "BaseBdev3", 00:42:18.041 "uuid": "7c3aee6b-4e5c-54a6-a1d2-2db07776ca80", 00:42:18.041 "is_configured": true, 00:42:18.041 "data_offset": 2048, 00:42:18.041 "data_size": 63488 00:42:18.041 }, 00:42:18.041 { 00:42:18.041 "name": "BaseBdev4", 00:42:18.041 "uuid": "e8d2b9b9-3e17-5df5-be5a-60012ebee856", 
00:42:18.041 "is_configured": true, 00:42:18.041 "data_offset": 2048, 00:42:18.041 "data_size": 63488 00:42:18.041 } 00:42:18.041 ] 00:42:18.041 }' 00:42:18.041 17:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:18.041 17:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:18.041 17:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:18.041 17:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:18.041 17:39:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:42:18.996 17:39:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:42:18.996 17:39:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:18.996 17:39:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:18.996 17:39:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:18.996 17:39:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:18.996 17:39:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:18.996 17:39:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:18.996 17:39:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:18.996 17:39:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:18.996 17:39:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:18.996 17:39:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:19.260 17:39:19 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:19.260 "name": "raid_bdev1", 00:42:19.260 "uuid": "d2e2f0cd-9cdf-4e11-a6d0-dcf361da4956", 00:42:19.260 "strip_size_kb": 64, 00:42:19.260 "state": "online", 00:42:19.260 "raid_level": "raid5f", 00:42:19.260 "superblock": true, 00:42:19.260 "num_base_bdevs": 4, 00:42:19.260 "num_base_bdevs_discovered": 4, 00:42:19.260 "num_base_bdevs_operational": 4, 00:42:19.260 "process": { 00:42:19.260 "type": "rebuild", 00:42:19.260 "target": "spare", 00:42:19.260 "progress": { 00:42:19.260 "blocks": 44160, 00:42:19.260 "percent": 23 00:42:19.260 } 00:42:19.260 }, 00:42:19.260 "base_bdevs_list": [ 00:42:19.260 { 00:42:19.260 "name": "spare", 00:42:19.260 "uuid": "ff97438e-cc11-57d3-8603-21e05f17ba6b", 00:42:19.260 "is_configured": true, 00:42:19.260 "data_offset": 2048, 00:42:19.260 "data_size": 63488 00:42:19.260 }, 00:42:19.260 { 00:42:19.260 "name": "BaseBdev2", 00:42:19.260 "uuid": "69cf60c0-a211-5105-9973-cc2ec3459d02", 00:42:19.260 "is_configured": true, 00:42:19.260 "data_offset": 2048, 00:42:19.260 "data_size": 63488 00:42:19.260 }, 00:42:19.260 { 00:42:19.260 "name": "BaseBdev3", 00:42:19.261 "uuid": "7c3aee6b-4e5c-54a6-a1d2-2db07776ca80", 00:42:19.261 "is_configured": true, 00:42:19.261 "data_offset": 2048, 00:42:19.261 "data_size": 63488 00:42:19.261 }, 00:42:19.261 { 00:42:19.261 "name": "BaseBdev4", 00:42:19.261 "uuid": "e8d2b9b9-3e17-5df5-be5a-60012ebee856", 00:42:19.261 "is_configured": true, 00:42:19.261 "data_offset": 2048, 00:42:19.261 "data_size": 63488 00:42:19.261 } 00:42:19.261 ] 00:42:19.261 }' 00:42:19.261 17:39:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:19.261 17:39:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:19.261 17:39:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:19.261 17:39:19 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:19.261 17:39:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:42:20.196 17:39:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:42:20.196 17:39:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:20.196 17:39:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:20.196 17:39:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:20.196 17:39:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:20.196 17:39:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:20.196 17:39:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:20.196 17:39:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:20.196 17:39:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:20.196 17:39:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:20.196 17:39:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:20.196 17:39:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:20.196 "name": "raid_bdev1", 00:42:20.196 "uuid": "d2e2f0cd-9cdf-4e11-a6d0-dcf361da4956", 00:42:20.196 "strip_size_kb": 64, 00:42:20.196 "state": "online", 00:42:20.196 "raid_level": "raid5f", 00:42:20.196 "superblock": true, 00:42:20.196 "num_base_bdevs": 4, 00:42:20.196 "num_base_bdevs_discovered": 4, 00:42:20.196 "num_base_bdevs_operational": 4, 00:42:20.196 "process": { 00:42:20.196 "type": "rebuild", 00:42:20.196 "target": "spare", 00:42:20.196 "progress": 
{ 00:42:20.196 "blocks": 65280, 00:42:20.196 "percent": 34 00:42:20.196 } 00:42:20.196 }, 00:42:20.196 "base_bdevs_list": [ 00:42:20.196 { 00:42:20.196 "name": "spare", 00:42:20.196 "uuid": "ff97438e-cc11-57d3-8603-21e05f17ba6b", 00:42:20.196 "is_configured": true, 00:42:20.196 "data_offset": 2048, 00:42:20.196 "data_size": 63488 00:42:20.196 }, 00:42:20.196 { 00:42:20.196 "name": "BaseBdev2", 00:42:20.196 "uuid": "69cf60c0-a211-5105-9973-cc2ec3459d02", 00:42:20.196 "is_configured": true, 00:42:20.196 "data_offset": 2048, 00:42:20.196 "data_size": 63488 00:42:20.196 }, 00:42:20.196 { 00:42:20.196 "name": "BaseBdev3", 00:42:20.196 "uuid": "7c3aee6b-4e5c-54a6-a1d2-2db07776ca80", 00:42:20.196 "is_configured": true, 00:42:20.196 "data_offset": 2048, 00:42:20.196 "data_size": 63488 00:42:20.196 }, 00:42:20.196 { 00:42:20.196 "name": "BaseBdev4", 00:42:20.196 "uuid": "e8d2b9b9-3e17-5df5-be5a-60012ebee856", 00:42:20.196 "is_configured": true, 00:42:20.196 "data_offset": 2048, 00:42:20.196 "data_size": 63488 00:42:20.196 } 00:42:20.196 ] 00:42:20.196 }' 00:42:20.196 17:39:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:20.455 17:39:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:20.455 17:39:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:20.455 17:39:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:20.455 17:39:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:42:21.390 17:39:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:42:21.390 17:39:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:21.390 17:39:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:21.391 
17:39:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:21.391 17:39:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:21.391 17:39:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:21.391 17:39:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:21.391 17:39:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:21.391 17:39:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:21.391 17:39:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:21.391 17:39:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:21.391 17:39:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:21.391 "name": "raid_bdev1", 00:42:21.391 "uuid": "d2e2f0cd-9cdf-4e11-a6d0-dcf361da4956", 00:42:21.391 "strip_size_kb": 64, 00:42:21.391 "state": "online", 00:42:21.391 "raid_level": "raid5f", 00:42:21.391 "superblock": true, 00:42:21.391 "num_base_bdevs": 4, 00:42:21.391 "num_base_bdevs_discovered": 4, 00:42:21.391 "num_base_bdevs_operational": 4, 00:42:21.391 "process": { 00:42:21.391 "type": "rebuild", 00:42:21.391 "target": "spare", 00:42:21.391 "progress": { 00:42:21.391 "blocks": 86400, 00:42:21.391 "percent": 45 00:42:21.391 } 00:42:21.391 }, 00:42:21.391 "base_bdevs_list": [ 00:42:21.391 { 00:42:21.391 "name": "spare", 00:42:21.391 "uuid": "ff97438e-cc11-57d3-8603-21e05f17ba6b", 00:42:21.391 "is_configured": true, 00:42:21.391 "data_offset": 2048, 00:42:21.391 "data_size": 63488 00:42:21.391 }, 00:42:21.391 { 00:42:21.391 "name": "BaseBdev2", 00:42:21.391 "uuid": "69cf60c0-a211-5105-9973-cc2ec3459d02", 00:42:21.391 "is_configured": true, 00:42:21.391 "data_offset": 2048, 00:42:21.391 "data_size": 
63488 00:42:21.391 }, 00:42:21.391 { 00:42:21.391 "name": "BaseBdev3", 00:42:21.391 "uuid": "7c3aee6b-4e5c-54a6-a1d2-2db07776ca80", 00:42:21.391 "is_configured": true, 00:42:21.391 "data_offset": 2048, 00:42:21.391 "data_size": 63488 00:42:21.391 }, 00:42:21.391 { 00:42:21.391 "name": "BaseBdev4", 00:42:21.391 "uuid": "e8d2b9b9-3e17-5df5-be5a-60012ebee856", 00:42:21.391 "is_configured": true, 00:42:21.391 "data_offset": 2048, 00:42:21.391 "data_size": 63488 00:42:21.391 } 00:42:21.391 ] 00:42:21.391 }' 00:42:21.391 17:39:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:21.391 17:39:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:21.391 17:39:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:21.649 17:39:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:21.649 17:39:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:42:22.585 17:39:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:42:22.585 17:39:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:22.585 17:39:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:22.585 17:39:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:22.585 17:39:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:22.585 17:39:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:22.585 17:39:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:22.585 17:39:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:42:22.585 17:39:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:22.585 17:39:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:22.585 17:39:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:22.585 17:39:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:22.585 "name": "raid_bdev1", 00:42:22.585 "uuid": "d2e2f0cd-9cdf-4e11-a6d0-dcf361da4956", 00:42:22.585 "strip_size_kb": 64, 00:42:22.585 "state": "online", 00:42:22.585 "raid_level": "raid5f", 00:42:22.585 "superblock": true, 00:42:22.585 "num_base_bdevs": 4, 00:42:22.585 "num_base_bdevs_discovered": 4, 00:42:22.585 "num_base_bdevs_operational": 4, 00:42:22.585 "process": { 00:42:22.585 "type": "rebuild", 00:42:22.585 "target": "spare", 00:42:22.585 "progress": { 00:42:22.585 "blocks": 109440, 00:42:22.585 "percent": 57 00:42:22.585 } 00:42:22.585 }, 00:42:22.585 "base_bdevs_list": [ 00:42:22.585 { 00:42:22.585 "name": "spare", 00:42:22.585 "uuid": "ff97438e-cc11-57d3-8603-21e05f17ba6b", 00:42:22.585 "is_configured": true, 00:42:22.585 "data_offset": 2048, 00:42:22.585 "data_size": 63488 00:42:22.585 }, 00:42:22.585 { 00:42:22.585 "name": "BaseBdev2", 00:42:22.585 "uuid": "69cf60c0-a211-5105-9973-cc2ec3459d02", 00:42:22.585 "is_configured": true, 00:42:22.585 "data_offset": 2048, 00:42:22.585 "data_size": 63488 00:42:22.585 }, 00:42:22.585 { 00:42:22.585 "name": "BaseBdev3", 00:42:22.585 "uuid": "7c3aee6b-4e5c-54a6-a1d2-2db07776ca80", 00:42:22.585 "is_configured": true, 00:42:22.585 "data_offset": 2048, 00:42:22.585 "data_size": 63488 00:42:22.585 }, 00:42:22.585 { 00:42:22.585 "name": "BaseBdev4", 00:42:22.585 "uuid": "e8d2b9b9-3e17-5df5-be5a-60012ebee856", 00:42:22.585 "is_configured": true, 00:42:22.585 "data_offset": 2048, 00:42:22.585 "data_size": 63488 00:42:22.585 } 00:42:22.585 ] 00:42:22.585 }' 00:42:22.585 17:39:23 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:22.585 17:39:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:22.585 17:39:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:22.585 17:39:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:22.586 17:39:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:42:23.962 17:39:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:42:23.963 17:39:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:23.963 17:39:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:23.963 17:39:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:23.963 17:39:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:23.963 17:39:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:23.963 17:39:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:23.963 17:39:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:23.963 17:39:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:23.963 17:39:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:23.963 17:39:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:23.963 17:39:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:23.963 "name": "raid_bdev1", 00:42:23.963 "uuid": "d2e2f0cd-9cdf-4e11-a6d0-dcf361da4956", 00:42:23.963 
"strip_size_kb": 64, 00:42:23.963 "state": "online", 00:42:23.963 "raid_level": "raid5f", 00:42:23.963 "superblock": true, 00:42:23.963 "num_base_bdevs": 4, 00:42:23.963 "num_base_bdevs_discovered": 4, 00:42:23.963 "num_base_bdevs_operational": 4, 00:42:23.963 "process": { 00:42:23.963 "type": "rebuild", 00:42:23.963 "target": "spare", 00:42:23.963 "progress": { 00:42:23.963 "blocks": 130560, 00:42:23.963 "percent": 68 00:42:23.963 } 00:42:23.963 }, 00:42:23.963 "base_bdevs_list": [ 00:42:23.963 { 00:42:23.963 "name": "spare", 00:42:23.963 "uuid": "ff97438e-cc11-57d3-8603-21e05f17ba6b", 00:42:23.963 "is_configured": true, 00:42:23.963 "data_offset": 2048, 00:42:23.963 "data_size": 63488 00:42:23.963 }, 00:42:23.963 { 00:42:23.963 "name": "BaseBdev2", 00:42:23.963 "uuid": "69cf60c0-a211-5105-9973-cc2ec3459d02", 00:42:23.963 "is_configured": true, 00:42:23.963 "data_offset": 2048, 00:42:23.963 "data_size": 63488 00:42:23.963 }, 00:42:23.963 { 00:42:23.963 "name": "BaseBdev3", 00:42:23.963 "uuid": "7c3aee6b-4e5c-54a6-a1d2-2db07776ca80", 00:42:23.963 "is_configured": true, 00:42:23.963 "data_offset": 2048, 00:42:23.963 "data_size": 63488 00:42:23.963 }, 00:42:23.963 { 00:42:23.963 "name": "BaseBdev4", 00:42:23.963 "uuid": "e8d2b9b9-3e17-5df5-be5a-60012ebee856", 00:42:23.963 "is_configured": true, 00:42:23.963 "data_offset": 2048, 00:42:23.963 "data_size": 63488 00:42:23.963 } 00:42:23.963 ] 00:42:23.963 }' 00:42:23.963 17:39:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:23.963 17:39:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:23.963 17:39:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:23.963 17:39:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:23.963 17:39:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:42:24.899 
17:39:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:42:24.899 17:39:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:24.899 17:39:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:24.899 17:39:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:24.899 17:39:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:24.899 17:39:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:24.899 17:39:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:24.899 17:39:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:24.899 17:39:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:24.899 17:39:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:24.899 17:39:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:24.899 17:39:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:24.899 "name": "raid_bdev1", 00:42:24.899 "uuid": "d2e2f0cd-9cdf-4e11-a6d0-dcf361da4956", 00:42:24.899 "strip_size_kb": 64, 00:42:24.899 "state": "online", 00:42:24.899 "raid_level": "raid5f", 00:42:24.899 "superblock": true, 00:42:24.899 "num_base_bdevs": 4, 00:42:24.899 "num_base_bdevs_discovered": 4, 00:42:24.899 "num_base_bdevs_operational": 4, 00:42:24.899 "process": { 00:42:24.899 "type": "rebuild", 00:42:24.899 "target": "spare", 00:42:24.899 "progress": { 00:42:24.899 "blocks": 153600, 00:42:24.899 "percent": 80 00:42:24.899 } 00:42:24.899 }, 00:42:24.899 "base_bdevs_list": [ 00:42:24.899 { 00:42:24.899 "name": "spare", 00:42:24.899 "uuid": 
"ff97438e-cc11-57d3-8603-21e05f17ba6b", 00:42:24.899 "is_configured": true, 00:42:24.899 "data_offset": 2048, 00:42:24.899 "data_size": 63488 00:42:24.899 }, 00:42:24.899 { 00:42:24.899 "name": "BaseBdev2", 00:42:24.899 "uuid": "69cf60c0-a211-5105-9973-cc2ec3459d02", 00:42:24.899 "is_configured": true, 00:42:24.899 "data_offset": 2048, 00:42:24.899 "data_size": 63488 00:42:24.899 }, 00:42:24.899 { 00:42:24.899 "name": "BaseBdev3", 00:42:24.899 "uuid": "7c3aee6b-4e5c-54a6-a1d2-2db07776ca80", 00:42:24.899 "is_configured": true, 00:42:24.899 "data_offset": 2048, 00:42:24.899 "data_size": 63488 00:42:24.899 }, 00:42:24.899 { 00:42:24.899 "name": "BaseBdev4", 00:42:24.899 "uuid": "e8d2b9b9-3e17-5df5-be5a-60012ebee856", 00:42:24.899 "is_configured": true, 00:42:24.899 "data_offset": 2048, 00:42:24.899 "data_size": 63488 00:42:24.899 } 00:42:24.899 ] 00:42:24.899 }' 00:42:24.899 17:39:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:24.899 17:39:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:24.899 17:39:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:24.899 17:39:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:24.899 17:39:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:42:25.834 17:39:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:42:26.093 17:39:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:26.093 17:39:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:26.093 17:39:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:26.093 17:39:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:42:26.093 17:39:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:26.093 17:39:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:26.093 17:39:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:26.093 17:39:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.093 17:39:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:26.093 17:39:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.093 17:39:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:26.093 "name": "raid_bdev1", 00:42:26.093 "uuid": "d2e2f0cd-9cdf-4e11-a6d0-dcf361da4956", 00:42:26.093 "strip_size_kb": 64, 00:42:26.093 "state": "online", 00:42:26.093 "raid_level": "raid5f", 00:42:26.093 "superblock": true, 00:42:26.093 "num_base_bdevs": 4, 00:42:26.093 "num_base_bdevs_discovered": 4, 00:42:26.093 "num_base_bdevs_operational": 4, 00:42:26.093 "process": { 00:42:26.093 "type": "rebuild", 00:42:26.093 "target": "spare", 00:42:26.093 "progress": { 00:42:26.093 "blocks": 174720, 00:42:26.093 "percent": 91 00:42:26.093 } 00:42:26.093 }, 00:42:26.093 "base_bdevs_list": [ 00:42:26.093 { 00:42:26.093 "name": "spare", 00:42:26.093 "uuid": "ff97438e-cc11-57d3-8603-21e05f17ba6b", 00:42:26.093 "is_configured": true, 00:42:26.093 "data_offset": 2048, 00:42:26.093 "data_size": 63488 00:42:26.093 }, 00:42:26.093 { 00:42:26.093 "name": "BaseBdev2", 00:42:26.093 "uuid": "69cf60c0-a211-5105-9973-cc2ec3459d02", 00:42:26.093 "is_configured": true, 00:42:26.093 "data_offset": 2048, 00:42:26.093 "data_size": 63488 00:42:26.093 }, 00:42:26.093 { 00:42:26.093 "name": "BaseBdev3", 00:42:26.093 "uuid": "7c3aee6b-4e5c-54a6-a1d2-2db07776ca80", 00:42:26.093 "is_configured": true, 00:42:26.093 
"data_offset": 2048, 00:42:26.093 "data_size": 63488 00:42:26.093 }, 00:42:26.093 { 00:42:26.093 "name": "BaseBdev4", 00:42:26.093 "uuid": "e8d2b9b9-3e17-5df5-be5a-60012ebee856", 00:42:26.093 "is_configured": true, 00:42:26.093 "data_offset": 2048, 00:42:26.093 "data_size": 63488 00:42:26.093 } 00:42:26.093 ] 00:42:26.093 }' 00:42:26.093 17:39:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:26.093 17:39:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:26.093 17:39:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:26.093 17:39:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:26.093 17:39:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:42:27.026 [2024-11-26 17:39:27.411483] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:42:27.026 [2024-11-26 17:39:27.411720] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:42:27.026 [2024-11-26 17:39:27.411900] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:27.026 17:39:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:42:27.026 17:39:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:27.026 17:39:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:27.027 17:39:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:27.027 17:39:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:27.027 17:39:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:27.027 17:39:27 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:27.027 17:39:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:27.027 17:39:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:27.027 17:39:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:27.027 17:39:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:27.285 17:39:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:27.285 "name": "raid_bdev1", 00:42:27.285 "uuid": "d2e2f0cd-9cdf-4e11-a6d0-dcf361da4956", 00:42:27.285 "strip_size_kb": 64, 00:42:27.285 "state": "online", 00:42:27.285 "raid_level": "raid5f", 00:42:27.285 "superblock": true, 00:42:27.285 "num_base_bdevs": 4, 00:42:27.285 "num_base_bdevs_discovered": 4, 00:42:27.285 "num_base_bdevs_operational": 4, 00:42:27.285 "base_bdevs_list": [ 00:42:27.285 { 00:42:27.285 "name": "spare", 00:42:27.285 "uuid": "ff97438e-cc11-57d3-8603-21e05f17ba6b", 00:42:27.285 "is_configured": true, 00:42:27.285 "data_offset": 2048, 00:42:27.285 "data_size": 63488 00:42:27.285 }, 00:42:27.285 { 00:42:27.285 "name": "BaseBdev2", 00:42:27.285 "uuid": "69cf60c0-a211-5105-9973-cc2ec3459d02", 00:42:27.285 "is_configured": true, 00:42:27.285 "data_offset": 2048, 00:42:27.285 "data_size": 63488 00:42:27.285 }, 00:42:27.285 { 00:42:27.285 "name": "BaseBdev3", 00:42:27.285 "uuid": "7c3aee6b-4e5c-54a6-a1d2-2db07776ca80", 00:42:27.285 "is_configured": true, 00:42:27.285 "data_offset": 2048, 00:42:27.285 "data_size": 63488 00:42:27.285 }, 00:42:27.285 { 00:42:27.285 "name": "BaseBdev4", 00:42:27.285 "uuid": "e8d2b9b9-3e17-5df5-be5a-60012ebee856", 00:42:27.285 "is_configured": true, 00:42:27.285 "data_offset": 2048, 00:42:27.285 "data_size": 63488 00:42:27.285 } 00:42:27.285 ] 00:42:27.285 }' 00:42:27.285 17:39:27 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:27.285 17:39:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:42:27.285 17:39:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:27.285 17:39:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:42:27.285 17:39:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:42:27.285 17:39:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:42:27.285 17:39:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:27.285 17:39:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:42:27.285 17:39:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:42:27.285 17:39:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:27.285 17:39:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:27.285 17:39:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:27.285 17:39:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:27.285 17:39:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:27.285 17:39:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:27.285 17:39:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:27.285 "name": "raid_bdev1", 00:42:27.285 "uuid": "d2e2f0cd-9cdf-4e11-a6d0-dcf361da4956", 00:42:27.285 "strip_size_kb": 64, 00:42:27.285 "state": "online", 00:42:27.285 "raid_level": "raid5f", 00:42:27.285 "superblock": true, 
00:42:27.285 "num_base_bdevs": 4, 00:42:27.285 "num_base_bdevs_discovered": 4, 00:42:27.285 "num_base_bdevs_operational": 4, 00:42:27.285 "base_bdevs_list": [ 00:42:27.285 { 00:42:27.285 "name": "spare", 00:42:27.285 "uuid": "ff97438e-cc11-57d3-8603-21e05f17ba6b", 00:42:27.285 "is_configured": true, 00:42:27.285 "data_offset": 2048, 00:42:27.285 "data_size": 63488 00:42:27.285 }, 00:42:27.285 { 00:42:27.285 "name": "BaseBdev2", 00:42:27.285 "uuid": "69cf60c0-a211-5105-9973-cc2ec3459d02", 00:42:27.285 "is_configured": true, 00:42:27.285 "data_offset": 2048, 00:42:27.285 "data_size": 63488 00:42:27.285 }, 00:42:27.285 { 00:42:27.286 "name": "BaseBdev3", 00:42:27.286 "uuid": "7c3aee6b-4e5c-54a6-a1d2-2db07776ca80", 00:42:27.286 "is_configured": true, 00:42:27.286 "data_offset": 2048, 00:42:27.286 "data_size": 63488 00:42:27.286 }, 00:42:27.286 { 00:42:27.286 "name": "BaseBdev4", 00:42:27.286 "uuid": "e8d2b9b9-3e17-5df5-be5a-60012ebee856", 00:42:27.286 "is_configured": true, 00:42:27.286 "data_offset": 2048, 00:42:27.286 "data_size": 63488 00:42:27.286 } 00:42:27.286 ] 00:42:27.286 }' 00:42:27.286 17:39:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:27.286 17:39:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:42:27.286 17:39:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:27.286 17:39:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:42:27.286 17:39:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:42:27.286 17:39:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:27.286 17:39:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:27.286 17:39:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- 
# local raid_level=raid5f 00:42:27.286 17:39:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:42:27.286 17:39:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:42:27.286 17:39:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:27.286 17:39:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:27.286 17:39:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:27.286 17:39:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:27.286 17:39:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:27.286 17:39:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:27.286 17:39:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:27.286 17:39:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:27.545 17:39:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:27.545 17:39:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:27.545 "name": "raid_bdev1", 00:42:27.545 "uuid": "d2e2f0cd-9cdf-4e11-a6d0-dcf361da4956", 00:42:27.545 "strip_size_kb": 64, 00:42:27.545 "state": "online", 00:42:27.545 "raid_level": "raid5f", 00:42:27.545 "superblock": true, 00:42:27.545 "num_base_bdevs": 4, 00:42:27.545 "num_base_bdevs_discovered": 4, 00:42:27.545 "num_base_bdevs_operational": 4, 00:42:27.545 "base_bdevs_list": [ 00:42:27.545 { 00:42:27.545 "name": "spare", 00:42:27.545 "uuid": "ff97438e-cc11-57d3-8603-21e05f17ba6b", 00:42:27.545 "is_configured": true, 00:42:27.545 "data_offset": 2048, 00:42:27.545 "data_size": 63488 00:42:27.545 }, 00:42:27.545 { 00:42:27.545 "name": 
"BaseBdev2", 00:42:27.545 "uuid": "69cf60c0-a211-5105-9973-cc2ec3459d02", 00:42:27.545 "is_configured": true, 00:42:27.545 "data_offset": 2048, 00:42:27.545 "data_size": 63488 00:42:27.545 }, 00:42:27.545 { 00:42:27.545 "name": "BaseBdev3", 00:42:27.545 "uuid": "7c3aee6b-4e5c-54a6-a1d2-2db07776ca80", 00:42:27.545 "is_configured": true, 00:42:27.545 "data_offset": 2048, 00:42:27.545 "data_size": 63488 00:42:27.545 }, 00:42:27.545 { 00:42:27.545 "name": "BaseBdev4", 00:42:27.545 "uuid": "e8d2b9b9-3e17-5df5-be5a-60012ebee856", 00:42:27.545 "is_configured": true, 00:42:27.545 "data_offset": 2048, 00:42:27.545 "data_size": 63488 00:42:27.545 } 00:42:27.545 ] 00:42:27.545 }' 00:42:27.545 17:39:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:27.545 17:39:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:27.804 17:39:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:42:27.804 17:39:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:27.804 17:39:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:27.804 [2024-11-26 17:39:28.440663] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:42:27.804 [2024-11-26 17:39:28.440710] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:42:27.804 [2024-11-26 17:39:28.440825] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:42:27.804 [2024-11-26 17:39:28.440953] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:42:27.804 [2024-11-26 17:39:28.440986] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:42:27.804 17:39:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:27.804 
17:39:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:27.804 17:39:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:27.804 17:39:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:27.804 17:39:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:42:27.804 17:39:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:27.804 17:39:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:42:27.804 17:39:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:42:27.804 17:39:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:42:27.804 17:39:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:42:27.804 17:39:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:42:27.804 17:39:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:42:27.804 17:39:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:42:27.804 17:39:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:42:27.804 17:39:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:42:27.804 17:39:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:42:27.804 17:39:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:42:27.804 17:39:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:42:27.804 17:39:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk 
BaseBdev1 /dev/nbd0 00:42:28.064 /dev/nbd0 00:42:28.064 17:39:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:42:28.064 17:39:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:42:28.064 17:39:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:42:28.064 17:39:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:42:28.064 17:39:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:42:28.064 17:39:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:42:28.064 17:39:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:42:28.064 17:39:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:42:28.064 17:39:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:42:28.064 17:39:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:42:28.064 17:39:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:28.064 1+0 records in 00:42:28.064 1+0 records out 00:42:28.064 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000357566 s, 11.5 MB/s 00:42:28.322 17:39:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:28.323 17:39:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:42:28.323 17:39:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:28.323 17:39:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:42:28.323 17:39:28 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@893 -- # return 0 00:42:28.323 17:39:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:42:28.323 17:39:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:42:28.323 17:39:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:42:28.323 /dev/nbd1 00:42:28.323 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:42:28.323 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:42:28.323 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:42:28.323 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:42:28.323 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:42:28.323 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:42:28.323 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:42:28.323 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:42:28.323 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:42:28.323 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:42:28.323 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:28.582 1+0 records in 00:42:28.582 1+0 records out 00:42:28.582 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404532 s, 10.1 MB/s 00:42:28.582 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:28.582 
17:39:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:42:28.582 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:28.582 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:42:28.582 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:42:28.582 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:42:28.582 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:42:28.582 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:42:28.582 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:42:28.582 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:42:28.582 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:42:28.582 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:42:28.582 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:42:28.582 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:28.582 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:42:28.840 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:42:28.840 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:42:28.840 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:42:28.841 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:28.841 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:28.841 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:42:28.841 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:42:28.841 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:42:28.841 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:28.841 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:42:29.099 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:42:29.099 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:42:29.099 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:42:29.099 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:29.099 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:29.099 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:42:29.099 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:42:29.099 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:42:29.099 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:42:29.099 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:42:29.099 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:29.099 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:29.100 17:39:29 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:29.100 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:42:29.100 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:29.100 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:29.100 [2024-11-26 17:39:29.672640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:42:29.100 [2024-11-26 17:39:29.672713] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:29.100 [2024-11-26 17:39:29.672743] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:42:29.100 [2024-11-26 17:39:29.672753] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:29.100 [2024-11-26 17:39:29.675637] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:29.100 [2024-11-26 17:39:29.675681] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:42:29.100 [2024-11-26 17:39:29.675794] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:42:29.100 [2024-11-26 17:39:29.675859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:42:29.100 [2024-11-26 17:39:29.676051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:42:29.100 [2024-11-26 17:39:29.676183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:42:29.100 [2024-11-26 17:39:29.676278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:42:29.100 spare 00:42:29.100 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:29.100 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # 
rpc_cmd bdev_wait_for_examine 00:42:29.100 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:29.100 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:29.100 [2024-11-26 17:39:29.776210] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:42:29.100 [2024-11-26 17:39:29.776289] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:42:29.100 [2024-11-26 17:39:29.776784] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:42:29.100 [2024-11-26 17:39:29.785045] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:42:29.100 [2024-11-26 17:39:29.785072] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:42:29.100 [2024-11-26 17:39:29.785332] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:29.358 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:29.358 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:42:29.358 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:29.359 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:29.359 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:42:29.359 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:42:29.359 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:42:29.359 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:29.359 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:29.359 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:29.359 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:29.359 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:29.359 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:29.359 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:29.359 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:29.359 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:29.359 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:29.359 "name": "raid_bdev1", 00:42:29.359 "uuid": "d2e2f0cd-9cdf-4e11-a6d0-dcf361da4956", 00:42:29.359 "strip_size_kb": 64, 00:42:29.359 "state": "online", 00:42:29.359 "raid_level": "raid5f", 00:42:29.359 "superblock": true, 00:42:29.359 "num_base_bdevs": 4, 00:42:29.359 "num_base_bdevs_discovered": 4, 00:42:29.359 "num_base_bdevs_operational": 4, 00:42:29.359 "base_bdevs_list": [ 00:42:29.359 { 00:42:29.359 "name": "spare", 00:42:29.359 "uuid": "ff97438e-cc11-57d3-8603-21e05f17ba6b", 00:42:29.359 "is_configured": true, 00:42:29.359 "data_offset": 2048, 00:42:29.359 "data_size": 63488 00:42:29.359 }, 00:42:29.359 { 00:42:29.359 "name": "BaseBdev2", 00:42:29.359 "uuid": "69cf60c0-a211-5105-9973-cc2ec3459d02", 00:42:29.359 "is_configured": true, 00:42:29.359 "data_offset": 2048, 00:42:29.359 "data_size": 63488 00:42:29.359 }, 00:42:29.359 { 00:42:29.359 "name": "BaseBdev3", 00:42:29.359 "uuid": "7c3aee6b-4e5c-54a6-a1d2-2db07776ca80", 00:42:29.359 "is_configured": true, 00:42:29.359 "data_offset": 2048, 00:42:29.359 "data_size": 63488 00:42:29.359 }, 
00:42:29.359 { 00:42:29.359 "name": "BaseBdev4", 00:42:29.359 "uuid": "e8d2b9b9-3e17-5df5-be5a-60012ebee856", 00:42:29.359 "is_configured": true, 00:42:29.359 "data_offset": 2048, 00:42:29.359 "data_size": 63488 00:42:29.359 } 00:42:29.359 ] 00:42:29.359 }' 00:42:29.359 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:29.359 17:39:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:29.618 17:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:42:29.618 17:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:29.618 17:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:42:29.618 17:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:42:29.618 17:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:29.618 17:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:29.618 17:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:29.618 17:39:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:29.618 17:39:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:29.618 17:39:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:29.618 17:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:29.618 "name": "raid_bdev1", 00:42:29.618 "uuid": "d2e2f0cd-9cdf-4e11-a6d0-dcf361da4956", 00:42:29.618 "strip_size_kb": 64, 00:42:29.618 "state": "online", 00:42:29.618 "raid_level": "raid5f", 00:42:29.618 "superblock": true, 00:42:29.618 "num_base_bdevs": 4, 00:42:29.618 "num_base_bdevs_discovered": 4, 
00:42:29.618 "num_base_bdevs_operational": 4, 00:42:29.618 "base_bdevs_list": [ 00:42:29.618 { 00:42:29.618 "name": "spare", 00:42:29.618 "uuid": "ff97438e-cc11-57d3-8603-21e05f17ba6b", 00:42:29.618 "is_configured": true, 00:42:29.618 "data_offset": 2048, 00:42:29.618 "data_size": 63488 00:42:29.618 }, 00:42:29.618 { 00:42:29.618 "name": "BaseBdev2", 00:42:29.618 "uuid": "69cf60c0-a211-5105-9973-cc2ec3459d02", 00:42:29.618 "is_configured": true, 00:42:29.618 "data_offset": 2048, 00:42:29.618 "data_size": 63488 00:42:29.618 }, 00:42:29.618 { 00:42:29.618 "name": "BaseBdev3", 00:42:29.618 "uuid": "7c3aee6b-4e5c-54a6-a1d2-2db07776ca80", 00:42:29.618 "is_configured": true, 00:42:29.618 "data_offset": 2048, 00:42:29.618 "data_size": 63488 00:42:29.618 }, 00:42:29.618 { 00:42:29.618 "name": "BaseBdev4", 00:42:29.618 "uuid": "e8d2b9b9-3e17-5df5-be5a-60012ebee856", 00:42:29.618 "is_configured": true, 00:42:29.618 "data_offset": 2048, 00:42:29.618 "data_size": 63488 00:42:29.618 } 00:42:29.618 ] 00:42:29.618 }' 00:42:29.618 17:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:29.878 17:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:42:29.878 17:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:29.878 17:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:42:29.878 17:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:42:29.878 17:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:29.878 17:39:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:29.878 17:39:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:29.878 17:39:30 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:29.878 17:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:42:29.878 17:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:42:29.878 17:39:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:29.878 17:39:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:29.878 [2024-11-26 17:39:30.402332] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:29.878 17:39:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:29.878 17:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:42:29.878 17:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:29.878 17:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:29.878 17:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:42:29.878 17:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:42:29.878 17:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:42:29.878 17:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:29.878 17:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:29.878 17:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:29.878 17:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:29.878 17:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:29.878 17:39:30 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:29.878 17:39:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:29.878 17:39:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:29.878 17:39:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:29.878 17:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:29.878 "name": "raid_bdev1", 00:42:29.878 "uuid": "d2e2f0cd-9cdf-4e11-a6d0-dcf361da4956", 00:42:29.878 "strip_size_kb": 64, 00:42:29.878 "state": "online", 00:42:29.878 "raid_level": "raid5f", 00:42:29.878 "superblock": true, 00:42:29.878 "num_base_bdevs": 4, 00:42:29.878 "num_base_bdevs_discovered": 3, 00:42:29.878 "num_base_bdevs_operational": 3, 00:42:29.878 "base_bdevs_list": [ 00:42:29.878 { 00:42:29.878 "name": null, 00:42:29.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:29.878 "is_configured": false, 00:42:29.878 "data_offset": 0, 00:42:29.878 "data_size": 63488 00:42:29.878 }, 00:42:29.878 { 00:42:29.879 "name": "BaseBdev2", 00:42:29.879 "uuid": "69cf60c0-a211-5105-9973-cc2ec3459d02", 00:42:29.879 "is_configured": true, 00:42:29.879 "data_offset": 2048, 00:42:29.879 "data_size": 63488 00:42:29.879 }, 00:42:29.879 { 00:42:29.879 "name": "BaseBdev3", 00:42:29.879 "uuid": "7c3aee6b-4e5c-54a6-a1d2-2db07776ca80", 00:42:29.879 "is_configured": true, 00:42:29.879 "data_offset": 2048, 00:42:29.879 "data_size": 63488 00:42:29.879 }, 00:42:29.879 { 00:42:29.879 "name": "BaseBdev4", 00:42:29.879 "uuid": "e8d2b9b9-3e17-5df5-be5a-60012ebee856", 00:42:29.879 "is_configured": true, 00:42:29.879 "data_offset": 2048, 00:42:29.879 "data_size": 63488 00:42:29.879 } 00:42:29.879 ] 00:42:29.879 }' 00:42:29.879 17:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:29.879 17:39:30 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:42:30.139 17:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:42:30.139 17:39:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:30.139 17:39:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:30.139 [2024-11-26 17:39:30.821718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:42:30.139 [2024-11-26 17:39:30.822000] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:42:30.139 [2024-11-26 17:39:30.822028] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:42:30.139 [2024-11-26 17:39:30.822092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:42:30.397 [2024-11-26 17:39:30.837995] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:42:30.397 17:39:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:30.397 17:39:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:42:30.397 [2024-11-26 17:39:30.847750] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:42:31.384 17:39:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:31.384 17:39:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:31.384 17:39:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:31.384 17:39:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:31.384 17:39:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:31.384 
17:39:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:31.384 17:39:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:31.384 17:39:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:31.384 17:39:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:31.384 17:39:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:31.384 17:39:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:31.384 "name": "raid_bdev1", 00:42:31.384 "uuid": "d2e2f0cd-9cdf-4e11-a6d0-dcf361da4956", 00:42:31.384 "strip_size_kb": 64, 00:42:31.384 "state": "online", 00:42:31.384 "raid_level": "raid5f", 00:42:31.384 "superblock": true, 00:42:31.384 "num_base_bdevs": 4, 00:42:31.384 "num_base_bdevs_discovered": 4, 00:42:31.384 "num_base_bdevs_operational": 4, 00:42:31.384 "process": { 00:42:31.384 "type": "rebuild", 00:42:31.384 "target": "spare", 00:42:31.384 "progress": { 00:42:31.384 "blocks": 17280, 00:42:31.384 "percent": 9 00:42:31.384 } 00:42:31.384 }, 00:42:31.384 "base_bdevs_list": [ 00:42:31.384 { 00:42:31.384 "name": "spare", 00:42:31.384 "uuid": "ff97438e-cc11-57d3-8603-21e05f17ba6b", 00:42:31.384 "is_configured": true, 00:42:31.384 "data_offset": 2048, 00:42:31.384 "data_size": 63488 00:42:31.384 }, 00:42:31.384 { 00:42:31.384 "name": "BaseBdev2", 00:42:31.384 "uuid": "69cf60c0-a211-5105-9973-cc2ec3459d02", 00:42:31.384 "is_configured": true, 00:42:31.384 "data_offset": 2048, 00:42:31.384 "data_size": 63488 00:42:31.384 }, 00:42:31.384 { 00:42:31.384 "name": "BaseBdev3", 00:42:31.384 "uuid": "7c3aee6b-4e5c-54a6-a1d2-2db07776ca80", 00:42:31.384 "is_configured": true, 00:42:31.384 "data_offset": 2048, 00:42:31.384 "data_size": 63488 00:42:31.384 }, 00:42:31.384 { 00:42:31.384 "name": "BaseBdev4", 00:42:31.384 "uuid": 
"e8d2b9b9-3e17-5df5-be5a-60012ebee856", 00:42:31.384 "is_configured": true, 00:42:31.384 "data_offset": 2048, 00:42:31.384 "data_size": 63488 00:42:31.384 } 00:42:31.384 ] 00:42:31.384 }' 00:42:31.384 17:39:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:31.384 17:39:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:31.384 17:39:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:31.384 17:39:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:31.384 17:39:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:42:31.384 17:39:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:31.384 17:39:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:31.384 [2024-11-26 17:39:31.999793] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:31.384 [2024-11-26 17:39:32.059817] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:42:31.384 [2024-11-26 17:39:32.059904] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:31.384 [2024-11-26 17:39:32.059926] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:31.384 [2024-11-26 17:39:32.059942] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:42:31.643 17:39:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:31.643 17:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:42:31.643 17:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:31.643 17:39:32 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:31.643 17:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:42:31.643 17:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:42:31.643 17:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:42:31.643 17:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:31.643 17:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:31.643 17:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:31.643 17:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:31.643 17:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:31.643 17:39:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:31.643 17:39:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:31.643 17:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:31.643 17:39:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:31.643 17:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:31.643 "name": "raid_bdev1", 00:42:31.643 "uuid": "d2e2f0cd-9cdf-4e11-a6d0-dcf361da4956", 00:42:31.643 "strip_size_kb": 64, 00:42:31.643 "state": "online", 00:42:31.643 "raid_level": "raid5f", 00:42:31.643 "superblock": true, 00:42:31.643 "num_base_bdevs": 4, 00:42:31.643 "num_base_bdevs_discovered": 3, 00:42:31.643 "num_base_bdevs_operational": 3, 00:42:31.643 "base_bdevs_list": [ 00:42:31.643 { 00:42:31.643 "name": null, 00:42:31.643 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:42:31.643 "is_configured": false, 00:42:31.643 "data_offset": 0, 00:42:31.643 "data_size": 63488 00:42:31.643 }, 00:42:31.643 { 00:42:31.643 "name": "BaseBdev2", 00:42:31.643 "uuid": "69cf60c0-a211-5105-9973-cc2ec3459d02", 00:42:31.643 "is_configured": true, 00:42:31.643 "data_offset": 2048, 00:42:31.643 "data_size": 63488 00:42:31.643 }, 00:42:31.643 { 00:42:31.643 "name": "BaseBdev3", 00:42:31.643 "uuid": "7c3aee6b-4e5c-54a6-a1d2-2db07776ca80", 00:42:31.643 "is_configured": true, 00:42:31.643 "data_offset": 2048, 00:42:31.643 "data_size": 63488 00:42:31.643 }, 00:42:31.643 { 00:42:31.643 "name": "BaseBdev4", 00:42:31.643 "uuid": "e8d2b9b9-3e17-5df5-be5a-60012ebee856", 00:42:31.643 "is_configured": true, 00:42:31.643 "data_offset": 2048, 00:42:31.643 "data_size": 63488 00:42:31.643 } 00:42:31.643 ] 00:42:31.643 }' 00:42:31.643 17:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:31.643 17:39:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:31.903 17:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:42:31.903 17:39:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:31.903 17:39:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:31.903 [2024-11-26 17:39:32.551137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:42:31.903 [2024-11-26 17:39:32.551242] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:31.903 [2024-11-26 17:39:32.551281] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:42:31.903 [2024-11-26 17:39:32.551297] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:31.903 [2024-11-26 17:39:32.552024] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:42:31.903 [2024-11-26 17:39:32.552060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:42:31.903 [2024-11-26 17:39:32.552196] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:42:31.903 [2024-11-26 17:39:32.552222] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:42:31.903 [2024-11-26 17:39:32.552239] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:42:31.903 [2024-11-26 17:39:32.552273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:42:31.903 [2024-11-26 17:39:32.569872] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:42:31.903 spare 00:42:31.903 17:39:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:31.903 17:39:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:42:31.903 [2024-11-26 17:39:32.580833] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:42:33.280 17:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:33.280 17:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:33.280 17:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:33.280 17:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:33.280 17:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:33.280 17:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:33.280 17:39:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:42:33.280 17:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:33.280 17:39:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:33.280 17:39:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.280 17:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:33.280 "name": "raid_bdev1", 00:42:33.280 "uuid": "d2e2f0cd-9cdf-4e11-a6d0-dcf361da4956", 00:42:33.280 "strip_size_kb": 64, 00:42:33.280 "state": "online", 00:42:33.280 "raid_level": "raid5f", 00:42:33.280 "superblock": true, 00:42:33.280 "num_base_bdevs": 4, 00:42:33.280 "num_base_bdevs_discovered": 4, 00:42:33.280 "num_base_bdevs_operational": 4, 00:42:33.280 "process": { 00:42:33.280 "type": "rebuild", 00:42:33.280 "target": "spare", 00:42:33.280 "progress": { 00:42:33.280 "blocks": 17280, 00:42:33.280 "percent": 9 00:42:33.280 } 00:42:33.280 }, 00:42:33.280 "base_bdevs_list": [ 00:42:33.280 { 00:42:33.280 "name": "spare", 00:42:33.280 "uuid": "ff97438e-cc11-57d3-8603-21e05f17ba6b", 00:42:33.280 "is_configured": true, 00:42:33.280 "data_offset": 2048, 00:42:33.280 "data_size": 63488 00:42:33.280 }, 00:42:33.280 { 00:42:33.280 "name": "BaseBdev2", 00:42:33.280 "uuid": "69cf60c0-a211-5105-9973-cc2ec3459d02", 00:42:33.280 "is_configured": true, 00:42:33.280 "data_offset": 2048, 00:42:33.280 "data_size": 63488 00:42:33.280 }, 00:42:33.280 { 00:42:33.280 "name": "BaseBdev3", 00:42:33.280 "uuid": "7c3aee6b-4e5c-54a6-a1d2-2db07776ca80", 00:42:33.280 "is_configured": true, 00:42:33.280 "data_offset": 2048, 00:42:33.280 "data_size": 63488 00:42:33.280 }, 00:42:33.280 { 00:42:33.280 "name": "BaseBdev4", 00:42:33.280 "uuid": "e8d2b9b9-3e17-5df5-be5a-60012ebee856", 00:42:33.280 "is_configured": true, 00:42:33.280 "data_offset": 2048, 00:42:33.280 "data_size": 63488 00:42:33.280 } 00:42:33.280 ] 00:42:33.280 }' 00:42:33.280 17:39:33 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:33.280 17:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:33.280 17:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:33.280 17:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:33.280 17:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:42:33.280 17:39:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:33.280 17:39:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:33.280 [2024-11-26 17:39:33.732803] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:33.280 [2024-11-26 17:39:33.791942] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:42:33.280 [2024-11-26 17:39:33.792025] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:33.280 [2024-11-26 17:39:33.792052] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:33.280 [2024-11-26 17:39:33.792061] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:42:33.280 17:39:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.280 17:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:42:33.280 17:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:33.280 17:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:33.280 17:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:42:33.280 
17:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:42:33.280 17:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:42:33.280 17:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:33.280 17:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:33.280 17:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:33.280 17:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:33.280 17:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:33.280 17:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:33.280 17:39:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:33.280 17:39:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:33.280 17:39:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.280 17:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:33.280 "name": "raid_bdev1", 00:42:33.280 "uuid": "d2e2f0cd-9cdf-4e11-a6d0-dcf361da4956", 00:42:33.280 "strip_size_kb": 64, 00:42:33.280 "state": "online", 00:42:33.280 "raid_level": "raid5f", 00:42:33.280 "superblock": true, 00:42:33.280 "num_base_bdevs": 4, 00:42:33.280 "num_base_bdevs_discovered": 3, 00:42:33.280 "num_base_bdevs_operational": 3, 00:42:33.280 "base_bdevs_list": [ 00:42:33.280 { 00:42:33.280 "name": null, 00:42:33.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:33.280 "is_configured": false, 00:42:33.280 "data_offset": 0, 00:42:33.280 "data_size": 63488 00:42:33.280 }, 00:42:33.280 { 00:42:33.280 "name": "BaseBdev2", 00:42:33.280 "uuid": 
"69cf60c0-a211-5105-9973-cc2ec3459d02", 00:42:33.280 "is_configured": true, 00:42:33.280 "data_offset": 2048, 00:42:33.280 "data_size": 63488 00:42:33.280 }, 00:42:33.280 { 00:42:33.280 "name": "BaseBdev3", 00:42:33.280 "uuid": "7c3aee6b-4e5c-54a6-a1d2-2db07776ca80", 00:42:33.280 "is_configured": true, 00:42:33.280 "data_offset": 2048, 00:42:33.280 "data_size": 63488 00:42:33.280 }, 00:42:33.280 { 00:42:33.280 "name": "BaseBdev4", 00:42:33.280 "uuid": "e8d2b9b9-3e17-5df5-be5a-60012ebee856", 00:42:33.280 "is_configured": true, 00:42:33.280 "data_offset": 2048, 00:42:33.280 "data_size": 63488 00:42:33.280 } 00:42:33.280 ] 00:42:33.280 }' 00:42:33.280 17:39:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:33.280 17:39:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:33.848 17:39:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:42:33.848 17:39:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:33.848 17:39:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:42:33.848 17:39:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:42:33.848 17:39:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:33.848 17:39:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:33.848 17:39:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:33.848 17:39:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:33.848 17:39:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:33.848 17:39:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.848 17:39:34 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:33.848 "name": "raid_bdev1", 00:42:33.848 "uuid": "d2e2f0cd-9cdf-4e11-a6d0-dcf361da4956", 00:42:33.848 "strip_size_kb": 64, 00:42:33.848 "state": "online", 00:42:33.848 "raid_level": "raid5f", 00:42:33.848 "superblock": true, 00:42:33.848 "num_base_bdevs": 4, 00:42:33.848 "num_base_bdevs_discovered": 3, 00:42:33.848 "num_base_bdevs_operational": 3, 00:42:33.848 "base_bdevs_list": [ 00:42:33.848 { 00:42:33.848 "name": null, 00:42:33.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:33.848 "is_configured": false, 00:42:33.848 "data_offset": 0, 00:42:33.848 "data_size": 63488 00:42:33.848 }, 00:42:33.848 { 00:42:33.848 "name": "BaseBdev2", 00:42:33.848 "uuid": "69cf60c0-a211-5105-9973-cc2ec3459d02", 00:42:33.848 "is_configured": true, 00:42:33.848 "data_offset": 2048, 00:42:33.848 "data_size": 63488 00:42:33.848 }, 00:42:33.848 { 00:42:33.848 "name": "BaseBdev3", 00:42:33.848 "uuid": "7c3aee6b-4e5c-54a6-a1d2-2db07776ca80", 00:42:33.848 "is_configured": true, 00:42:33.848 "data_offset": 2048, 00:42:33.848 "data_size": 63488 00:42:33.848 }, 00:42:33.848 { 00:42:33.848 "name": "BaseBdev4", 00:42:33.848 "uuid": "e8d2b9b9-3e17-5df5-be5a-60012ebee856", 00:42:33.848 "is_configured": true, 00:42:33.848 "data_offset": 2048, 00:42:33.848 "data_size": 63488 00:42:33.848 } 00:42:33.848 ] 00:42:33.848 }' 00:42:33.848 17:39:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:33.848 17:39:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:42:33.848 17:39:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:33.848 17:39:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:42:33.848 17:39:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:42:33.848 
17:39:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:33.848 17:39:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:33.848 17:39:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.848 17:39:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:42:33.848 17:39:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:33.848 17:39:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:33.848 [2024-11-26 17:39:34.425784] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:42:33.848 [2024-11-26 17:39:34.425859] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:33.848 [2024-11-26 17:39:34.425888] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:42:33.848 [2024-11-26 17:39:34.425899] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:33.848 [2024-11-26 17:39:34.426478] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:33.848 [2024-11-26 17:39:34.426521] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:42:33.848 [2024-11-26 17:39:34.426619] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:42:33.848 [2024-11-26 17:39:34.426647] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:42:33.848 [2024-11-26 17:39:34.426662] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:42:33.848 [2024-11-26 17:39:34.426675] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 
00:42:33.848 BaseBdev1 00:42:33.848 17:39:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.848 17:39:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:42:34.785 17:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:42:34.785 17:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:34.785 17:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:34.785 17:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:42:34.785 17:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:42:34.785 17:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:42:34.785 17:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:34.785 17:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:34.785 17:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:34.785 17:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:34.785 17:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:34.785 17:39:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.785 17:39:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:34.785 17:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:34.785 17:39:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:35.043 17:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:42:35.043 "name": "raid_bdev1", 00:42:35.043 "uuid": "d2e2f0cd-9cdf-4e11-a6d0-dcf361da4956", 00:42:35.043 "strip_size_kb": 64, 00:42:35.043 "state": "online", 00:42:35.043 "raid_level": "raid5f", 00:42:35.043 "superblock": true, 00:42:35.043 "num_base_bdevs": 4, 00:42:35.043 "num_base_bdevs_discovered": 3, 00:42:35.043 "num_base_bdevs_operational": 3, 00:42:35.043 "base_bdevs_list": [ 00:42:35.043 { 00:42:35.043 "name": null, 00:42:35.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:35.043 "is_configured": false, 00:42:35.043 "data_offset": 0, 00:42:35.043 "data_size": 63488 00:42:35.043 }, 00:42:35.043 { 00:42:35.043 "name": "BaseBdev2", 00:42:35.043 "uuid": "69cf60c0-a211-5105-9973-cc2ec3459d02", 00:42:35.043 "is_configured": true, 00:42:35.044 "data_offset": 2048, 00:42:35.044 "data_size": 63488 00:42:35.044 }, 00:42:35.044 { 00:42:35.044 "name": "BaseBdev3", 00:42:35.044 "uuid": "7c3aee6b-4e5c-54a6-a1d2-2db07776ca80", 00:42:35.044 "is_configured": true, 00:42:35.044 "data_offset": 2048, 00:42:35.044 "data_size": 63488 00:42:35.044 }, 00:42:35.044 { 00:42:35.044 "name": "BaseBdev4", 00:42:35.044 "uuid": "e8d2b9b9-3e17-5df5-be5a-60012ebee856", 00:42:35.044 "is_configured": true, 00:42:35.044 "data_offset": 2048, 00:42:35.044 "data_size": 63488 00:42:35.044 } 00:42:35.044 ] 00:42:35.044 }' 00:42:35.044 17:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:35.044 17:39:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:35.303 17:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:42:35.303 17:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:35.303 17:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:42:35.303 17:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=none 00:42:35.303 17:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:35.303 17:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:35.303 17:39:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:35.303 17:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:35.303 17:39:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:35.303 17:39:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:35.303 17:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:35.303 "name": "raid_bdev1", 00:42:35.303 "uuid": "d2e2f0cd-9cdf-4e11-a6d0-dcf361da4956", 00:42:35.303 "strip_size_kb": 64, 00:42:35.303 "state": "online", 00:42:35.303 "raid_level": "raid5f", 00:42:35.303 "superblock": true, 00:42:35.303 "num_base_bdevs": 4, 00:42:35.303 "num_base_bdevs_discovered": 3, 00:42:35.303 "num_base_bdevs_operational": 3, 00:42:35.303 "base_bdevs_list": [ 00:42:35.303 { 00:42:35.303 "name": null, 00:42:35.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:35.303 "is_configured": false, 00:42:35.303 "data_offset": 0, 00:42:35.303 "data_size": 63488 00:42:35.303 }, 00:42:35.303 { 00:42:35.303 "name": "BaseBdev2", 00:42:35.303 "uuid": "69cf60c0-a211-5105-9973-cc2ec3459d02", 00:42:35.303 "is_configured": true, 00:42:35.303 "data_offset": 2048, 00:42:35.303 "data_size": 63488 00:42:35.303 }, 00:42:35.303 { 00:42:35.303 "name": "BaseBdev3", 00:42:35.303 "uuid": "7c3aee6b-4e5c-54a6-a1d2-2db07776ca80", 00:42:35.303 "is_configured": true, 00:42:35.303 "data_offset": 2048, 00:42:35.303 "data_size": 63488 00:42:35.303 }, 00:42:35.303 { 00:42:35.303 "name": "BaseBdev4", 00:42:35.303 "uuid": "e8d2b9b9-3e17-5df5-be5a-60012ebee856", 00:42:35.303 "is_configured": true, 
00:42:35.303 "data_offset": 2048, 00:42:35.303 "data_size": 63488 00:42:35.303 } 00:42:35.303 ] 00:42:35.303 }' 00:42:35.303 17:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:35.303 17:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:42:35.303 17:39:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:35.575 17:39:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:42:35.575 17:39:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:42:35.575 17:39:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:42:35.575 17:39:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:42:35.575 17:39:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:42:35.575 17:39:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:35.575 17:39:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:42:35.575 17:39:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:35.575 17:39:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:42:35.575 17:39:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:35.575 17:39:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:35.575 [2024-11-26 17:39:36.023384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:42:35.576 [2024-11-26 17:39:36.023642] bdev_raid.c:3700:raid_bdev_examine_sb: 
*DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:42:35.576 [2024-11-26 17:39:36.023670] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:42:35.576 request: 00:42:35.576 { 00:42:35.576 "base_bdev": "BaseBdev1", 00:42:35.576 "raid_bdev": "raid_bdev1", 00:42:35.576 "method": "bdev_raid_add_base_bdev", 00:42:35.576 "req_id": 1 00:42:35.576 } 00:42:35.576 Got JSON-RPC error response 00:42:35.576 response: 00:42:35.576 { 00:42:35.576 "code": -22, 00:42:35.576 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:42:35.576 } 00:42:35.576 17:39:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:42:35.576 17:39:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:42:35.576 17:39:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:35.576 17:39:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:35.576 17:39:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:35.576 17:39:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:42:36.597 17:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:42:36.597 17:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:36.597 17:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:36.597 17:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:42:36.597 17:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:42:36.597 17:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:42:36.597 17:39:37 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:36.597 17:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:36.597 17:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:36.597 17:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:36.597 17:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:36.597 17:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:36.597 17:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:36.597 17:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:36.597 17:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:36.597 17:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:36.597 "name": "raid_bdev1", 00:42:36.597 "uuid": "d2e2f0cd-9cdf-4e11-a6d0-dcf361da4956", 00:42:36.597 "strip_size_kb": 64, 00:42:36.597 "state": "online", 00:42:36.597 "raid_level": "raid5f", 00:42:36.597 "superblock": true, 00:42:36.597 "num_base_bdevs": 4, 00:42:36.597 "num_base_bdevs_discovered": 3, 00:42:36.597 "num_base_bdevs_operational": 3, 00:42:36.598 "base_bdevs_list": [ 00:42:36.598 { 00:42:36.598 "name": null, 00:42:36.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:36.598 "is_configured": false, 00:42:36.598 "data_offset": 0, 00:42:36.598 "data_size": 63488 00:42:36.598 }, 00:42:36.598 { 00:42:36.598 "name": "BaseBdev2", 00:42:36.598 "uuid": "69cf60c0-a211-5105-9973-cc2ec3459d02", 00:42:36.598 "is_configured": true, 00:42:36.598 "data_offset": 2048, 00:42:36.598 "data_size": 63488 00:42:36.598 }, 00:42:36.598 { 00:42:36.598 "name": "BaseBdev3", 00:42:36.598 "uuid": 
"7c3aee6b-4e5c-54a6-a1d2-2db07776ca80", 00:42:36.598 "is_configured": true, 00:42:36.598 "data_offset": 2048, 00:42:36.598 "data_size": 63488 00:42:36.598 }, 00:42:36.598 { 00:42:36.598 "name": "BaseBdev4", 00:42:36.598 "uuid": "e8d2b9b9-3e17-5df5-be5a-60012ebee856", 00:42:36.598 "is_configured": true, 00:42:36.598 "data_offset": 2048, 00:42:36.598 "data_size": 63488 00:42:36.598 } 00:42:36.598 ] 00:42:36.598 }' 00:42:36.598 17:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:36.598 17:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:36.857 17:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:42:36.857 17:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:36.857 17:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:42:36.857 17:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:42:36.857 17:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:36.857 17:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:36.857 17:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:36.857 17:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:36.857 17:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:36.857 17:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:36.857 17:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:36.857 "name": "raid_bdev1", 00:42:36.857 "uuid": "d2e2f0cd-9cdf-4e11-a6d0-dcf361da4956", 00:42:36.857 "strip_size_kb": 64, 00:42:36.857 "state": 
"online", 00:42:36.857 "raid_level": "raid5f", 00:42:36.857 "superblock": true, 00:42:36.857 "num_base_bdevs": 4, 00:42:36.857 "num_base_bdevs_discovered": 3, 00:42:36.857 "num_base_bdevs_operational": 3, 00:42:36.857 "base_bdevs_list": [ 00:42:36.857 { 00:42:36.857 "name": null, 00:42:36.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:36.857 "is_configured": false, 00:42:36.857 "data_offset": 0, 00:42:36.857 "data_size": 63488 00:42:36.857 }, 00:42:36.857 { 00:42:36.857 "name": "BaseBdev2", 00:42:36.857 "uuid": "69cf60c0-a211-5105-9973-cc2ec3459d02", 00:42:36.857 "is_configured": true, 00:42:36.857 "data_offset": 2048, 00:42:36.857 "data_size": 63488 00:42:36.857 }, 00:42:36.857 { 00:42:36.857 "name": "BaseBdev3", 00:42:36.857 "uuid": "7c3aee6b-4e5c-54a6-a1d2-2db07776ca80", 00:42:36.857 "is_configured": true, 00:42:36.857 "data_offset": 2048, 00:42:36.857 "data_size": 63488 00:42:36.857 }, 00:42:36.857 { 00:42:36.857 "name": "BaseBdev4", 00:42:36.857 "uuid": "e8d2b9b9-3e17-5df5-be5a-60012ebee856", 00:42:36.857 "is_configured": true, 00:42:36.857 "data_offset": 2048, 00:42:36.857 "data_size": 63488 00:42:36.857 } 00:42:36.857 ] 00:42:36.857 }' 00:42:36.857 17:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:37.116 17:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:42:37.116 17:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:37.116 17:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:42:37.116 17:39:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85425 00:42:37.116 17:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85425 ']' 00:42:37.116 17:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 85425 00:42:37.116 17:39:37 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@959 -- # uname 00:42:37.116 17:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:37.116 17:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85425 00:42:37.116 17:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:37.116 17:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:37.116 killing process with pid 85425 00:42:37.116 17:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85425' 00:42:37.116 17:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 85425 00:42:37.116 Received shutdown signal, test time was about 60.000000 seconds 00:42:37.116 00:42:37.116 Latency(us) 00:42:37.116 [2024-11-26T17:39:37.811Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:37.116 [2024-11-26T17:39:37.811Z] =================================================================================================================== 00:42:37.116 [2024-11-26T17:39:37.811Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:42:37.116 17:39:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 85425 00:42:37.116 [2024-11-26 17:39:37.623817] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:42:37.116 [2024-11-26 17:39:37.623990] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:42:37.116 [2024-11-26 17:39:37.624108] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:42:37.116 [2024-11-26 17:39:37.624124] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:42:37.684 [2024-11-26 17:39:38.165802] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: 
raid_bdev_exit 00:42:39.063 17:39:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:42:39.063 00:42:39.063 real 0m27.430s 00:42:39.063 user 0m34.118s 00:42:39.063 sys 0m3.295s 00:42:39.063 17:39:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:39.063 17:39:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:42:39.063 ************************************ 00:42:39.063 END TEST raid5f_rebuild_test_sb 00:42:39.063 ************************************ 00:42:39.063 17:39:39 bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:42:39.063 17:39:39 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:42:39.063 17:39:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:42:39.063 17:39:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:39.063 17:39:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:42:39.063 ************************************ 00:42:39.063 START TEST raid_state_function_test_sb_4k 00:42:39.063 ************************************ 00:42:39.063 17:39:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:42:39.063 17:39:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:42:39.063 17:39:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:42:39.063 17:39:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:42:39.063 17:39:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:42:39.063 17:39:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:42:39.063 17:39:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs 
)) 00:42:39.063 17:39:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:42:39.063 17:39:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:42:39.063 17:39:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:42:39.063 17:39:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:42:39.063 17:39:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:42:39.063 17:39:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:42:39.063 17:39:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:42:39.063 17:39:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:42:39.063 17:39:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:42:39.063 17:39:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:42:39.063 17:39:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:42:39.063 17:39:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:42:39.063 17:39:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:42:39.063 17:39:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:42:39.063 17:39:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:42:39.063 17:39:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:42:39.063 17:39:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86244 00:42:39.063 17:39:39 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:42:39.063 17:39:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86244' 00:42:39.063 Process raid pid: 86244 00:42:39.063 17:39:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86244 00:42:39.063 17:39:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86244 ']' 00:42:39.063 17:39:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:39.063 17:39:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:39.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:39.063 17:39:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:39.063 17:39:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:39.063 17:39:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:39.063 [2024-11-26 17:39:39.574721] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:42:39.063 [2024-11-26 17:39:39.574887] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:39.323 [2024-11-26 17:39:39.760931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:39.323 [2024-11-26 17:39:39.923116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:39.582 [2024-11-26 17:39:40.215981] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:42:39.582 [2024-11-26 17:39:40.216040] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:42:39.841 17:39:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:39.841 17:39:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:42:39.841 17:39:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:42:39.841 17:39:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:39.841 17:39:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:39.841 [2024-11-26 17:39:40.506937] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:42:39.841 [2024-11-26 17:39:40.507022] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:42:39.841 [2024-11-26 17:39:40.507034] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:42:39.841 [2024-11-26 17:39:40.507046] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:42:39.841 17:39:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:42:39.841 17:39:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:42:39.841 17:39:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:42:39.841 17:39:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:42:39.841 17:39:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:39.841 17:39:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:39.841 17:39:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:42:39.841 17:39:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:39.841 17:39:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:39.841 17:39:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:39.841 17:39:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:39.841 17:39:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:39.841 17:39:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:42:39.841 17:39:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:39.841 17:39:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:39.841 17:39:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:40.099 17:39:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:40.099 "name": "Existed_Raid", 00:42:40.099 "uuid": 
"8c9e1640-be45-4d89-9f9f-70e4b3207d68", 00:42:40.099 "strip_size_kb": 0, 00:42:40.099 "state": "configuring", 00:42:40.099 "raid_level": "raid1", 00:42:40.099 "superblock": true, 00:42:40.099 "num_base_bdevs": 2, 00:42:40.099 "num_base_bdevs_discovered": 0, 00:42:40.099 "num_base_bdevs_operational": 2, 00:42:40.099 "base_bdevs_list": [ 00:42:40.099 { 00:42:40.099 "name": "BaseBdev1", 00:42:40.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:40.099 "is_configured": false, 00:42:40.099 "data_offset": 0, 00:42:40.099 "data_size": 0 00:42:40.099 }, 00:42:40.099 { 00:42:40.099 "name": "BaseBdev2", 00:42:40.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:40.099 "is_configured": false, 00:42:40.099 "data_offset": 0, 00:42:40.099 "data_size": 0 00:42:40.099 } 00:42:40.099 ] 00:42:40.099 }' 00:42:40.099 17:39:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:40.099 17:39:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:40.358 17:39:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:42:40.358 17:39:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:40.358 17:39:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:40.358 [2024-11-26 17:39:40.926223] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:42:40.358 [2024-11-26 17:39:40.926278] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:42:40.358 17:39:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:40.358 17:39:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:42:40.358 17:39:40 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:40.358 17:39:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:40.358 [2024-11-26 17:39:40.938171] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:42:40.358 [2024-11-26 17:39:40.938228] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:42:40.358 [2024-11-26 17:39:40.938241] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:42:40.358 [2024-11-26 17:39:40.938256] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:42:40.358 17:39:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:40.358 17:39:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:42:40.358 17:39:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:40.358 17:39:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:40.358 [2024-11-26 17:39:41.000294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:42:40.358 BaseBdev1 00:42:40.358 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:40.358 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:42:40.358 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:42:40.358 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:42:40.358 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:42:40.358 17:39:41 bdev_raid.raid_state_function_test_sb_4k 
-- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:42:40.358 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:42:40.358 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:42:40.358 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:40.358 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:40.358 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:40.358 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:42:40.358 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:40.358 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:40.358 [ 00:42:40.358 { 00:42:40.358 "name": "BaseBdev1", 00:42:40.358 "aliases": [ 00:42:40.358 "7b752f46-9273-442f-8501-632253eaa701" 00:42:40.358 ], 00:42:40.358 "product_name": "Malloc disk", 00:42:40.358 "block_size": 4096, 00:42:40.358 "num_blocks": 8192, 00:42:40.358 "uuid": "7b752f46-9273-442f-8501-632253eaa701", 00:42:40.358 "assigned_rate_limits": { 00:42:40.358 "rw_ios_per_sec": 0, 00:42:40.358 "rw_mbytes_per_sec": 0, 00:42:40.358 "r_mbytes_per_sec": 0, 00:42:40.358 "w_mbytes_per_sec": 0 00:42:40.358 }, 00:42:40.358 "claimed": true, 00:42:40.358 "claim_type": "exclusive_write", 00:42:40.358 "zoned": false, 00:42:40.358 "supported_io_types": { 00:42:40.358 "read": true, 00:42:40.358 "write": true, 00:42:40.358 "unmap": true, 00:42:40.358 "flush": true, 00:42:40.358 "reset": true, 00:42:40.358 "nvme_admin": false, 00:42:40.358 "nvme_io": false, 00:42:40.358 "nvme_io_md": false, 00:42:40.358 "write_zeroes": true, 00:42:40.358 "zcopy": true, 00:42:40.358 
"get_zone_info": false, 00:42:40.358 "zone_management": false, 00:42:40.358 "zone_append": false, 00:42:40.358 "compare": false, 00:42:40.358 "compare_and_write": false, 00:42:40.358 "abort": true, 00:42:40.358 "seek_hole": false, 00:42:40.358 "seek_data": false, 00:42:40.358 "copy": true, 00:42:40.358 "nvme_iov_md": false 00:42:40.358 }, 00:42:40.358 "memory_domains": [ 00:42:40.358 { 00:42:40.358 "dma_device_id": "system", 00:42:40.358 "dma_device_type": 1 00:42:40.358 }, 00:42:40.358 { 00:42:40.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:42:40.358 "dma_device_type": 2 00:42:40.358 } 00:42:40.358 ], 00:42:40.358 "driver_specific": {} 00:42:40.358 } 00:42:40.358 ] 00:42:40.358 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:40.358 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:42:40.358 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:42:40.358 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:42:40.358 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:42:40.358 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:40.358 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:40.358 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:42:40.359 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:40.359 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:40.359 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:42:40.359 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:40.359 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:40.359 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:42:40.359 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:40.359 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:40.359 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:40.617 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:40.617 "name": "Existed_Raid", 00:42:40.617 "uuid": "b19f0873-512e-4861-b341-bf7e9ae73b7e", 00:42:40.617 "strip_size_kb": 0, 00:42:40.617 "state": "configuring", 00:42:40.617 "raid_level": "raid1", 00:42:40.617 "superblock": true, 00:42:40.617 "num_base_bdevs": 2, 00:42:40.617 "num_base_bdevs_discovered": 1, 00:42:40.617 "num_base_bdevs_operational": 2, 00:42:40.617 "base_bdevs_list": [ 00:42:40.617 { 00:42:40.617 "name": "BaseBdev1", 00:42:40.617 "uuid": "7b752f46-9273-442f-8501-632253eaa701", 00:42:40.617 "is_configured": true, 00:42:40.617 "data_offset": 256, 00:42:40.617 "data_size": 7936 00:42:40.617 }, 00:42:40.617 { 00:42:40.617 "name": "BaseBdev2", 00:42:40.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:40.617 "is_configured": false, 00:42:40.617 "data_offset": 0, 00:42:40.617 "data_size": 0 00:42:40.617 } 00:42:40.617 ] 00:42:40.617 }' 00:42:40.617 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:40.617 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:40.876 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:42:40.876 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:40.876 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:40.876 [2024-11-26 17:39:41.523549] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:42:40.876 [2024-11-26 17:39:41.523638] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:42:40.876 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:40.876 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:42:40.876 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:40.876 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:40.876 [2024-11-26 17:39:41.535575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:42:40.876 [2024-11-26 17:39:41.538045] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:42:40.876 [2024-11-26 17:39:41.538105] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:42:40.876 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:40.876 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:42:40.876 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:42:40.876 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:42:40.876 17:39:41 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:42:40.876 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:42:40.876 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:40.876 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:40.876 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:42:40.876 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:40.876 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:40.876 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:40.876 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:40.876 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:40.876 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:42:40.876 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:40.876 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:40.876 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:41.135 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:41.135 "name": "Existed_Raid", 00:42:41.135 "uuid": "613ba67a-eb4e-414a-8d46-4aa236a8d6b1", 00:42:41.135 "strip_size_kb": 0, 00:42:41.135 "state": "configuring", 00:42:41.135 "raid_level": "raid1", 00:42:41.135 "superblock": true, 
00:42:41.135 "num_base_bdevs": 2, 00:42:41.135 "num_base_bdevs_discovered": 1, 00:42:41.135 "num_base_bdevs_operational": 2, 00:42:41.135 "base_bdevs_list": [ 00:42:41.135 { 00:42:41.135 "name": "BaseBdev1", 00:42:41.135 "uuid": "7b752f46-9273-442f-8501-632253eaa701", 00:42:41.135 "is_configured": true, 00:42:41.135 "data_offset": 256, 00:42:41.135 "data_size": 7936 00:42:41.135 }, 00:42:41.135 { 00:42:41.135 "name": "BaseBdev2", 00:42:41.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:41.135 "is_configured": false, 00:42:41.135 "data_offset": 0, 00:42:41.135 "data_size": 0 00:42:41.135 } 00:42:41.135 ] 00:42:41.135 }' 00:42:41.135 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:41.135 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:41.394 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:42:41.394 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:41.394 17:39:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:41.394 [2024-11-26 17:39:42.053251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:42:41.394 [2024-11-26 17:39:42.053648] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:42:41.394 [2024-11-26 17:39:42.053675] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:42:41.394 [2024-11-26 17:39:42.054007] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:42:41.394 [2024-11-26 17:39:42.054223] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:42:41.394 [2024-11-26 17:39:42.054248] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:42:41.394 BaseBdev2 00:42:41.394 [2024-11-26 17:39:42.054453] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:41.394 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:41.394 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:42:41.394 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:42:41.394 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:42:41.394 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:42:41.394 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:42:41.394 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:42:41.394 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:42:41.394 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:41.394 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:41.394 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:41.394 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:42:41.394 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:41.394 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:41.394 [ 00:42:41.394 { 00:42:41.394 "name": "BaseBdev2", 00:42:41.394 "aliases": [ 00:42:41.394 "00b7cbd2-975e-42e9-97f3-266d3db38a55" 00:42:41.394 ], 00:42:41.394 "product_name": "Malloc 
disk", 00:42:41.394 "block_size": 4096, 00:42:41.394 "num_blocks": 8192, 00:42:41.394 "uuid": "00b7cbd2-975e-42e9-97f3-266d3db38a55", 00:42:41.394 "assigned_rate_limits": { 00:42:41.394 "rw_ios_per_sec": 0, 00:42:41.394 "rw_mbytes_per_sec": 0, 00:42:41.394 "r_mbytes_per_sec": 0, 00:42:41.394 "w_mbytes_per_sec": 0 00:42:41.394 }, 00:42:41.394 "claimed": true, 00:42:41.394 "claim_type": "exclusive_write", 00:42:41.394 "zoned": false, 00:42:41.394 "supported_io_types": { 00:42:41.394 "read": true, 00:42:41.394 "write": true, 00:42:41.394 "unmap": true, 00:42:41.394 "flush": true, 00:42:41.394 "reset": true, 00:42:41.394 "nvme_admin": false, 00:42:41.394 "nvme_io": false, 00:42:41.394 "nvme_io_md": false, 00:42:41.394 "write_zeroes": true, 00:42:41.654 "zcopy": true, 00:42:41.654 "get_zone_info": false, 00:42:41.654 "zone_management": false, 00:42:41.654 "zone_append": false, 00:42:41.654 "compare": false, 00:42:41.654 "compare_and_write": false, 00:42:41.654 "abort": true, 00:42:41.654 "seek_hole": false, 00:42:41.654 "seek_data": false, 00:42:41.654 "copy": true, 00:42:41.654 "nvme_iov_md": false 00:42:41.654 }, 00:42:41.654 "memory_domains": [ 00:42:41.654 { 00:42:41.654 "dma_device_id": "system", 00:42:41.654 "dma_device_type": 1 00:42:41.654 }, 00:42:41.654 { 00:42:41.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:42:41.654 "dma_device_type": 2 00:42:41.654 } 00:42:41.654 ], 00:42:41.654 "driver_specific": {} 00:42:41.654 } 00:42:41.654 ] 00:42:41.654 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:41.654 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:42:41.654 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:42:41.654 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:42:41.654 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 
-- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:42:41.654 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:42:41.654 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:41.654 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:41.654 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:41.654 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:42:41.654 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:41.654 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:41.654 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:41.654 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:41.654 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:41.654 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:41.654 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:42:41.654 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:41.654 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:41.654 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:41.654 "name": "Existed_Raid", 00:42:41.654 "uuid": "613ba67a-eb4e-414a-8d46-4aa236a8d6b1", 00:42:41.654 "strip_size_kb": 0, 00:42:41.654 "state": "online", 
00:42:41.654 "raid_level": "raid1", 00:42:41.654 "superblock": true, 00:42:41.654 "num_base_bdevs": 2, 00:42:41.654 "num_base_bdevs_discovered": 2, 00:42:41.654 "num_base_bdevs_operational": 2, 00:42:41.654 "base_bdevs_list": [ 00:42:41.654 { 00:42:41.654 "name": "BaseBdev1", 00:42:41.654 "uuid": "7b752f46-9273-442f-8501-632253eaa701", 00:42:41.654 "is_configured": true, 00:42:41.654 "data_offset": 256, 00:42:41.654 "data_size": 7936 00:42:41.654 }, 00:42:41.654 { 00:42:41.654 "name": "BaseBdev2", 00:42:41.654 "uuid": "00b7cbd2-975e-42e9-97f3-266d3db38a55", 00:42:41.654 "is_configured": true, 00:42:41.654 "data_offset": 256, 00:42:41.654 "data_size": 7936 00:42:41.654 } 00:42:41.654 ] 00:42:41.654 }' 00:42:41.654 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:41.654 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:41.914 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:42:41.914 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:42:41.914 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:42:41.914 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:42:41.914 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:42:41.914 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:42:41.914 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:42:41.914 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:41.914 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set 
+x 00:42:41.914 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:42:41.914 [2024-11-26 17:39:42.561021] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:42:41.914 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:41.914 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:42:41.914 "name": "Existed_Raid", 00:42:41.914 "aliases": [ 00:42:41.914 "613ba67a-eb4e-414a-8d46-4aa236a8d6b1" 00:42:41.914 ], 00:42:41.914 "product_name": "Raid Volume", 00:42:41.914 "block_size": 4096, 00:42:41.914 "num_blocks": 7936, 00:42:41.914 "uuid": "613ba67a-eb4e-414a-8d46-4aa236a8d6b1", 00:42:41.914 "assigned_rate_limits": { 00:42:41.914 "rw_ios_per_sec": 0, 00:42:41.914 "rw_mbytes_per_sec": 0, 00:42:41.914 "r_mbytes_per_sec": 0, 00:42:41.914 "w_mbytes_per_sec": 0 00:42:41.914 }, 00:42:41.914 "claimed": false, 00:42:41.914 "zoned": false, 00:42:41.914 "supported_io_types": { 00:42:41.914 "read": true, 00:42:41.914 "write": true, 00:42:41.914 "unmap": false, 00:42:41.914 "flush": false, 00:42:41.914 "reset": true, 00:42:41.914 "nvme_admin": false, 00:42:41.914 "nvme_io": false, 00:42:41.914 "nvme_io_md": false, 00:42:41.914 "write_zeroes": true, 00:42:41.914 "zcopy": false, 00:42:41.914 "get_zone_info": false, 00:42:41.914 "zone_management": false, 00:42:41.914 "zone_append": false, 00:42:41.914 "compare": false, 00:42:41.914 "compare_and_write": false, 00:42:41.914 "abort": false, 00:42:41.914 "seek_hole": false, 00:42:41.914 "seek_data": false, 00:42:41.914 "copy": false, 00:42:41.914 "nvme_iov_md": false 00:42:41.914 }, 00:42:41.914 "memory_domains": [ 00:42:41.914 { 00:42:41.914 "dma_device_id": "system", 00:42:41.914 "dma_device_type": 1 00:42:41.914 }, 00:42:41.914 { 00:42:41.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:42:41.914 "dma_device_type": 2 00:42:41.914 }, 00:42:41.914 { 00:42:41.914 
"dma_device_id": "system", 00:42:41.914 "dma_device_type": 1 00:42:41.914 }, 00:42:41.914 { 00:42:41.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:42:41.914 "dma_device_type": 2 00:42:41.914 } 00:42:41.914 ], 00:42:41.914 "driver_specific": { 00:42:41.914 "raid": { 00:42:41.914 "uuid": "613ba67a-eb4e-414a-8d46-4aa236a8d6b1", 00:42:41.914 "strip_size_kb": 0, 00:42:41.914 "state": "online", 00:42:41.914 "raid_level": "raid1", 00:42:41.915 "superblock": true, 00:42:41.915 "num_base_bdevs": 2, 00:42:41.915 "num_base_bdevs_discovered": 2, 00:42:41.915 "num_base_bdevs_operational": 2, 00:42:41.915 "base_bdevs_list": [ 00:42:41.915 { 00:42:41.915 "name": "BaseBdev1", 00:42:41.915 "uuid": "7b752f46-9273-442f-8501-632253eaa701", 00:42:41.915 "is_configured": true, 00:42:41.915 "data_offset": 256, 00:42:41.915 "data_size": 7936 00:42:41.915 }, 00:42:41.915 { 00:42:41.915 "name": "BaseBdev2", 00:42:41.915 "uuid": "00b7cbd2-975e-42e9-97f3-266d3db38a55", 00:42:41.915 "is_configured": true, 00:42:41.915 "data_offset": 256, 00:42:41.915 "data_size": 7936 00:42:41.915 } 00:42:41.915 ] 00:42:41.915 } 00:42:41.915 } 00:42:41.915 }' 00:42:41.915 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:42:42.174 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:42:42.174 BaseBdev2' 00:42:42.174 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:42:42.174 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:42:42.174 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:42:42.174 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
00:42:42.174 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:42.174 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:42.174 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:42:42.174 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:42.174 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:42:42.174 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:42:42.174 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:42:42.174 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:42:42.174 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:42:42.174 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:42.174 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:42.174 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:42.174 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:42:42.174 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:42:42.174 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:42:42.174 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:42.174 
17:39:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:42.174 [2024-11-26 17:39:42.812419] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:42:42.434 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:42.434 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:42:42.434 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:42:42.434 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:42:42.434 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:42:42.434 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:42:42.434 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:42:42.434 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:42:42.434 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:42.434 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:42.434 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:42.434 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:42:42.434 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:42.434 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:42.434 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:42.434 17:39:42 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:42.434 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:42.434 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:42:42.434 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:42.434 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:42.434 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:42.434 17:39:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:42.434 "name": "Existed_Raid", 00:42:42.434 "uuid": "613ba67a-eb4e-414a-8d46-4aa236a8d6b1", 00:42:42.434 "strip_size_kb": 0, 00:42:42.434 "state": "online", 00:42:42.434 "raid_level": "raid1", 00:42:42.434 "superblock": true, 00:42:42.434 "num_base_bdevs": 2, 00:42:42.434 "num_base_bdevs_discovered": 1, 00:42:42.434 "num_base_bdevs_operational": 1, 00:42:42.434 "base_bdevs_list": [ 00:42:42.434 { 00:42:42.434 "name": null, 00:42:42.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:42.434 "is_configured": false, 00:42:42.434 "data_offset": 0, 00:42:42.434 "data_size": 7936 00:42:42.434 }, 00:42:42.434 { 00:42:42.434 "name": "BaseBdev2", 00:42:42.434 "uuid": "00b7cbd2-975e-42e9-97f3-266d3db38a55", 00:42:42.434 "is_configured": true, 00:42:42.434 "data_offset": 256, 00:42:42.434 "data_size": 7936 00:42:42.434 } 00:42:42.434 ] 00:42:42.434 }' 00:42:42.434 17:39:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:42.434 17:39:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:42.693 17:39:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:42:42.693 17:39:43 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:42:42.951 17:39:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:42.951 17:39:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:42:42.951 17:39:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:42.951 17:39:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:42.951 17:39:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:42.951 17:39:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:42:42.951 17:39:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:42:42.951 17:39:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:42:42.951 17:39:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:42.951 17:39:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:42.951 [2024-11-26 17:39:43.443572] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:42:42.951 [2024-11-26 17:39:43.443734] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:42:42.951 [2024-11-26 17:39:43.573051] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:42:42.951 [2024-11-26 17:39:43.573121] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:42:42.951 [2024-11-26 17:39:43.573138] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:42:42.952 17:39:43 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:42.952 17:39:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:42:42.952 17:39:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:42:42.952 17:39:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:42.952 17:39:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:42:42.952 17:39:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:42.952 17:39:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:42.952 17:39:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:42.952 17:39:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:42:42.952 17:39:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:42:42.952 17:39:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:42:42.952 17:39:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86244 00:42:42.952 17:39:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86244 ']' 00:42:42.952 17:39:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86244 00:42:42.952 17:39:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:42:42.952 17:39:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:42.952 17:39:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86244 00:42:43.211 17:39:43 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:43.211 17:39:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:43.211 killing process with pid 86244 00:42:43.211 17:39:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86244' 00:42:43.211 17:39:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86244 00:42:43.211 [2024-11-26 17:39:43.668365] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:42:43.211 17:39:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86244 00:42:43.211 [2024-11-26 17:39:43.691428] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:42:44.681 17:39:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:42:44.681 00:42:44.681 real 0m5.732s 00:42:44.681 user 0m7.929s 00:42:44.681 sys 0m1.068s 00:42:44.681 17:39:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:44.681 17:39:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:44.681 ************************************ 00:42:44.681 END TEST raid_state_function_test_sb_4k 00:42:44.681 ************************************ 00:42:44.681 17:39:45 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:42:44.681 17:39:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:42:44.681 17:39:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:44.681 17:39:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:42:44.681 ************************************ 00:42:44.681 START TEST raid_superblock_test_4k 00:42:44.681 ************************************ 00:42:44.681 17:39:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # 
raid_superblock_test raid1 2 00:42:44.681 17:39:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:42:44.681 17:39:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:42:44.681 17:39:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:42:44.681 17:39:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:42:44.681 17:39:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:42:44.681 17:39:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:42:44.681 17:39:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:42:44.681 17:39:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:42:44.681 17:39:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:42:44.681 17:39:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:42:44.681 17:39:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:42:44.681 17:39:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:42:44.681 17:39:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:42:44.681 17:39:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:42:44.681 17:39:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:42:44.681 17:39:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86501 00:42:44.681 17:39:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:42:44.681 17:39:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # 
waitforlisten 86501 00:42:44.681 17:39:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86501 ']' 00:42:44.681 17:39:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:44.681 17:39:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:44.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:44.681 17:39:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:44.681 17:39:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:44.681 17:39:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:44.963 [2024-11-26 17:39:45.376681] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:42:44.963 [2024-11-26 17:39:45.376835] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86501 ] 00:42:44.963 [2024-11-26 17:39:45.560301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:45.221 [2024-11-26 17:39:45.720985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:45.479 [2024-11-26 17:39:46.005951] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:42:45.479 [2024-11-26 17:39:46.006007] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:42:45.739 17:39:46 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc1 00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:45.739 malloc1 00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:45.739 [2024-11-26 17:39:46.315660] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:42:45.739 [2024-11-26 17:39:46.315737] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:45.739 
[2024-11-26 17:39:46.315766] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:42:45.739 [2024-11-26 17:39:46.315780] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:45.739 [2024-11-26 17:39:46.318694] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:45.739 [2024-11-26 17:39:46.318735] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:42:45.739 pt1 00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:45.739 malloc2 00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:45.739 [2024-11-26 17:39:46.385121] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:42:45.739 [2024-11-26 17:39:46.385296] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:45.739 [2024-11-26 17:39:46.385358] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:42:45.739 [2024-11-26 17:39:46.385405] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:45.739 [2024-11-26 17:39:46.388547] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:45.739 [2024-11-26 17:39:46.388628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:42:45.739 pt2 00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:45.739 [2024-11-26 17:39:46.397324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:42:45.739 [2024-11-26 17:39:46.399880] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:42:45.739 [2024-11-26 17:39:46.400166] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:42:45.739 [2024-11-26 17:39:46.400225] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:42:45.739 [2024-11-26 17:39:46.400586] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:42:45.739 [2024-11-26 17:39:46.400834] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:42:45.739 [2024-11-26 17:39:46.400890] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:42:45.739 [2024-11-26 17:39:46.401140] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:45.739 17:39:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:45.740 17:39:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.740 17:39:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:45.740 17:39:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:45.998 17:39:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:45.998 "name": "raid_bdev1", 00:42:45.998 "uuid": "2519e067-4050-40a1-a558-44d98d86b60f", 00:42:45.998 "strip_size_kb": 0, 00:42:45.999 "state": "online", 00:42:45.999 "raid_level": "raid1", 00:42:45.999 "superblock": true, 00:42:45.999 "num_base_bdevs": 2, 00:42:45.999 "num_base_bdevs_discovered": 2, 00:42:45.999 "num_base_bdevs_operational": 2, 00:42:45.999 "base_bdevs_list": [ 00:42:45.999 { 00:42:45.999 "name": "pt1", 00:42:45.999 "uuid": "00000000-0000-0000-0000-000000000001", 00:42:45.999 "is_configured": true, 00:42:45.999 "data_offset": 256, 00:42:45.999 "data_size": 7936 00:42:45.999 }, 00:42:45.999 { 00:42:45.999 "name": "pt2", 00:42:45.999 "uuid": "00000000-0000-0000-0000-000000000002", 00:42:45.999 "is_configured": true, 00:42:45.999 "data_offset": 256, 00:42:45.999 "data_size": 7936 00:42:45.999 } 00:42:45.999 ] 00:42:45.999 }' 00:42:45.999 17:39:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:45.999 17:39:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:46.258 17:39:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:42:46.258 17:39:46 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:42:46.258 17:39:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:42:46.258 17:39:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:42:46.258 17:39:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:42:46.258 17:39:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:42:46.258 17:39:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:42:46.258 17:39:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:46.258 17:39:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:46.258 17:39:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:42:46.258 [2024-11-26 17:39:46.929115] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:42:46.258 17:39:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:46.517 17:39:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:42:46.517 "name": "raid_bdev1", 00:42:46.517 "aliases": [ 00:42:46.517 "2519e067-4050-40a1-a558-44d98d86b60f" 00:42:46.517 ], 00:42:46.517 "product_name": "Raid Volume", 00:42:46.517 "block_size": 4096, 00:42:46.517 "num_blocks": 7936, 00:42:46.517 "uuid": "2519e067-4050-40a1-a558-44d98d86b60f", 00:42:46.517 "assigned_rate_limits": { 00:42:46.517 "rw_ios_per_sec": 0, 00:42:46.517 "rw_mbytes_per_sec": 0, 00:42:46.517 "r_mbytes_per_sec": 0, 00:42:46.517 "w_mbytes_per_sec": 0 00:42:46.517 }, 00:42:46.517 "claimed": false, 00:42:46.517 "zoned": false, 00:42:46.517 "supported_io_types": { 00:42:46.517 "read": true, 00:42:46.517 "write": true, 00:42:46.517 "unmap": false, 00:42:46.517 "flush": false, 
00:42:46.517 "reset": true, 00:42:46.517 "nvme_admin": false, 00:42:46.517 "nvme_io": false, 00:42:46.517 "nvme_io_md": false, 00:42:46.517 "write_zeroes": true, 00:42:46.517 "zcopy": false, 00:42:46.517 "get_zone_info": false, 00:42:46.517 "zone_management": false, 00:42:46.517 "zone_append": false, 00:42:46.517 "compare": false, 00:42:46.517 "compare_and_write": false, 00:42:46.517 "abort": false, 00:42:46.517 "seek_hole": false, 00:42:46.517 "seek_data": false, 00:42:46.517 "copy": false, 00:42:46.517 "nvme_iov_md": false 00:42:46.517 }, 00:42:46.517 "memory_domains": [ 00:42:46.517 { 00:42:46.517 "dma_device_id": "system", 00:42:46.517 "dma_device_type": 1 00:42:46.517 }, 00:42:46.517 { 00:42:46.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:42:46.517 "dma_device_type": 2 00:42:46.517 }, 00:42:46.517 { 00:42:46.517 "dma_device_id": "system", 00:42:46.517 "dma_device_type": 1 00:42:46.517 }, 00:42:46.517 { 00:42:46.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:42:46.517 "dma_device_type": 2 00:42:46.517 } 00:42:46.517 ], 00:42:46.517 "driver_specific": { 00:42:46.517 "raid": { 00:42:46.517 "uuid": "2519e067-4050-40a1-a558-44d98d86b60f", 00:42:46.517 "strip_size_kb": 0, 00:42:46.517 "state": "online", 00:42:46.517 "raid_level": "raid1", 00:42:46.517 "superblock": true, 00:42:46.517 "num_base_bdevs": 2, 00:42:46.517 "num_base_bdevs_discovered": 2, 00:42:46.517 "num_base_bdevs_operational": 2, 00:42:46.517 "base_bdevs_list": [ 00:42:46.517 { 00:42:46.517 "name": "pt1", 00:42:46.517 "uuid": "00000000-0000-0000-0000-000000000001", 00:42:46.517 "is_configured": true, 00:42:46.517 "data_offset": 256, 00:42:46.517 "data_size": 7936 00:42:46.517 }, 00:42:46.517 { 00:42:46.517 "name": "pt2", 00:42:46.517 "uuid": "00000000-0000-0000-0000-000000000002", 00:42:46.517 "is_configured": true, 00:42:46.517 "data_offset": 256, 00:42:46.517 "data_size": 7936 00:42:46.517 } 00:42:46.517 ] 00:42:46.517 } 00:42:46.517 } 00:42:46.517 }' 00:42:46.517 17:39:46 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:42:46.517 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:42:46.517 pt2' 00:42:46.517 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:42:46.517 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:42:46.517 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:42:46.517 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:42:46.518 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:42:46.518 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:46.518 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:46.518 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:46.518 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:42:46.518 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:42:46.518 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:42:46.518 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:42:46.518 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:42:46.518 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:46.518 17:39:47 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:46.518 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:46.518 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:42:46.518 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:42:46.518 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:42:46.518 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:42:46.518 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:46.518 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:46.518 [2024-11-26 17:39:47.164661] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:42:46.518 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:46.518 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2519e067-4050-40a1-a558-44d98d86b60f 00:42:46.518 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 2519e067-4050-40a1-a558-44d98d86b60f ']' 00:42:46.518 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:42:46.518 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:46.518 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:46.777 [2024-11-26 17:39:47.212175] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:42:46.777 [2024-11-26 17:39:47.212258] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:42:46.777 [2024-11-26 17:39:47.212421] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:42:46.777 [2024-11-26 17:39:47.212551] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:42:46.777 [2024-11-26 17:39:47.212614] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:42:46.777 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:46.777 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:46.777 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:46.777 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:42:46.777 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:46.777 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:46.777 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:42:46.777 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:42:46.777 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:42:46.777 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:42:46.777 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:46.777 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:46.777 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:46.777 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:42:46.777 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 
00:42:46.777 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:46.777 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:46.777 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:46.777 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:42:46.777 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:42:46.777 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:46.777 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:46.777 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:46.777 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:42:46.778 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:42:46.778 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:42:46.778 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:42:46.778 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:42:46.778 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:46.778 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:42:46.778 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:46.778 17:39:47 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:42:46.778 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:46.778 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:46.778 [2024-11-26 17:39:47.355979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:42:46.778 [2024-11-26 17:39:47.358657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:42:46.778 [2024-11-26 17:39:47.358747] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:42:46.778 [2024-11-26 17:39:47.358818] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:42:46.778 [2024-11-26 17:39:47.358837] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:42:46.778 [2024-11-26 17:39:47.358852] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:42:46.778 request: 00:42:46.778 { 00:42:46.778 "name": "raid_bdev1", 00:42:46.778 "raid_level": "raid1", 00:42:46.778 "base_bdevs": [ 00:42:46.778 "malloc1", 00:42:46.778 "malloc2" 00:42:46.778 ], 00:42:46.778 "superblock": false, 00:42:46.778 "method": "bdev_raid_create", 00:42:46.778 "req_id": 1 00:42:46.778 } 00:42:46.778 Got JSON-RPC error response 00:42:46.778 response: 00:42:46.778 { 00:42:46.778 "code": -17, 00:42:46.778 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:42:46.778 } 00:42:46.778 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:42:46.778 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:42:46.778 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 
128 )) 00:42:46.778 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:46.778 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:46.778 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:46.778 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:46.778 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:46.778 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:42:46.778 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:46.778 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:42:46.778 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:42:46.778 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:42:46.778 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:46.778 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:46.778 [2024-11-26 17:39:47.423852] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:42:46.778 [2024-11-26 17:39:47.423976] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:46.778 [2024-11-26 17:39:47.424023] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:42:46.778 [2024-11-26 17:39:47.424064] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:46.778 [2024-11-26 17:39:47.427146] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:46.778 [2024-11-26 17:39:47.427235] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:42:46.778 [2024-11-26 17:39:47.427374] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:42:46.778 [2024-11-26 17:39:47.427482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:42:46.778 pt1 00:42:46.778 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:46.778 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:42:46.778 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:46.778 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:42:46.778 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:46.778 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:46.778 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:42:46.778 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:46.778 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:46.778 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:46.778 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:46.778 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:46.778 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:46.778 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:46.778 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "raid_bdev1")' 00:42:46.778 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:47.036 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:47.036 "name": "raid_bdev1", 00:42:47.036 "uuid": "2519e067-4050-40a1-a558-44d98d86b60f", 00:42:47.036 "strip_size_kb": 0, 00:42:47.036 "state": "configuring", 00:42:47.036 "raid_level": "raid1", 00:42:47.036 "superblock": true, 00:42:47.036 "num_base_bdevs": 2, 00:42:47.036 "num_base_bdevs_discovered": 1, 00:42:47.036 "num_base_bdevs_operational": 2, 00:42:47.036 "base_bdevs_list": [ 00:42:47.036 { 00:42:47.036 "name": "pt1", 00:42:47.036 "uuid": "00000000-0000-0000-0000-000000000001", 00:42:47.036 "is_configured": true, 00:42:47.036 "data_offset": 256, 00:42:47.036 "data_size": 7936 00:42:47.036 }, 00:42:47.036 { 00:42:47.036 "name": null, 00:42:47.036 "uuid": "00000000-0000-0000-0000-000000000002", 00:42:47.036 "is_configured": false, 00:42:47.036 "data_offset": 256, 00:42:47.036 "data_size": 7936 00:42:47.036 } 00:42:47.036 ] 00:42:47.036 }' 00:42:47.036 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:47.036 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:47.295 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:42:47.295 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:42:47.295 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:42:47.295 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:42:47.295 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:47.295 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set 
+x 00:42:47.295 [2024-11-26 17:39:47.939093] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:42:47.295 [2024-11-26 17:39:47.939204] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:47.295 [2024-11-26 17:39:47.939235] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:42:47.295 [2024-11-26 17:39:47.939249] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:47.295 [2024-11-26 17:39:47.939879] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:47.295 [2024-11-26 17:39:47.939919] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:42:47.295 [2024-11-26 17:39:47.940033] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:42:47.295 [2024-11-26 17:39:47.940071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:42:47.295 [2024-11-26 17:39:47.940231] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:42:47.295 [2024-11-26 17:39:47.940245] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:42:47.295 [2024-11-26 17:39:47.940599] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:42:47.295 [2024-11-26 17:39:47.940804] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:42:47.295 [2024-11-26 17:39:47.940816] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:42:47.295 [2024-11-26 17:39:47.941022] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:47.295 pt2 00:42:47.295 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:47.295 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:42:47.295 17:39:47 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:42:47.295 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:42:47.295 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:47.295 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:47.295 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:47.295 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:47.295 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:42:47.295 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:47.295 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:47.295 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:47.295 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:47.295 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:47.295 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:47.295 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:47.295 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:47.295 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:47.554 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:47.554 "name": "raid_bdev1", 00:42:47.554 "uuid": "2519e067-4050-40a1-a558-44d98d86b60f", 00:42:47.554 
"strip_size_kb": 0, 00:42:47.554 "state": "online", 00:42:47.554 "raid_level": "raid1", 00:42:47.554 "superblock": true, 00:42:47.554 "num_base_bdevs": 2, 00:42:47.554 "num_base_bdevs_discovered": 2, 00:42:47.554 "num_base_bdevs_operational": 2, 00:42:47.554 "base_bdevs_list": [ 00:42:47.554 { 00:42:47.554 "name": "pt1", 00:42:47.554 "uuid": "00000000-0000-0000-0000-000000000001", 00:42:47.554 "is_configured": true, 00:42:47.554 "data_offset": 256, 00:42:47.554 "data_size": 7936 00:42:47.554 }, 00:42:47.554 { 00:42:47.554 "name": "pt2", 00:42:47.554 "uuid": "00000000-0000-0000-0000-000000000002", 00:42:47.554 "is_configured": true, 00:42:47.554 "data_offset": 256, 00:42:47.554 "data_size": 7936 00:42:47.554 } 00:42:47.554 ] 00:42:47.554 }' 00:42:47.554 17:39:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:47.554 17:39:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:47.813 17:39:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:42:47.813 17:39:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:42:47.813 17:39:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:42:47.813 17:39:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:42:47.813 17:39:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:42:47.813 17:39:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:42:47.813 17:39:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:42:47.813 17:39:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:42:47.813 17:39:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:47.813 17:39:48 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:47.813 [2024-11-26 17:39:48.394693] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:42:47.813 17:39:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:47.813 17:39:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:42:47.813 "name": "raid_bdev1", 00:42:47.813 "aliases": [ 00:42:47.813 "2519e067-4050-40a1-a558-44d98d86b60f" 00:42:47.813 ], 00:42:47.813 "product_name": "Raid Volume", 00:42:47.813 "block_size": 4096, 00:42:47.813 "num_blocks": 7936, 00:42:47.813 "uuid": "2519e067-4050-40a1-a558-44d98d86b60f", 00:42:47.813 "assigned_rate_limits": { 00:42:47.813 "rw_ios_per_sec": 0, 00:42:47.813 "rw_mbytes_per_sec": 0, 00:42:47.813 "r_mbytes_per_sec": 0, 00:42:47.813 "w_mbytes_per_sec": 0 00:42:47.813 }, 00:42:47.813 "claimed": false, 00:42:47.813 "zoned": false, 00:42:47.813 "supported_io_types": { 00:42:47.813 "read": true, 00:42:47.813 "write": true, 00:42:47.813 "unmap": false, 00:42:47.813 "flush": false, 00:42:47.813 "reset": true, 00:42:47.813 "nvme_admin": false, 00:42:47.813 "nvme_io": false, 00:42:47.813 "nvme_io_md": false, 00:42:47.813 "write_zeroes": true, 00:42:47.813 "zcopy": false, 00:42:47.813 "get_zone_info": false, 00:42:47.813 "zone_management": false, 00:42:47.813 "zone_append": false, 00:42:47.813 "compare": false, 00:42:47.813 "compare_and_write": false, 00:42:47.813 "abort": false, 00:42:47.813 "seek_hole": false, 00:42:47.813 "seek_data": false, 00:42:47.813 "copy": false, 00:42:47.813 "nvme_iov_md": false 00:42:47.813 }, 00:42:47.813 "memory_domains": [ 00:42:47.813 { 00:42:47.813 "dma_device_id": "system", 00:42:47.813 "dma_device_type": 1 00:42:47.813 }, 00:42:47.813 { 00:42:47.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:42:47.813 "dma_device_type": 2 00:42:47.813 }, 00:42:47.813 { 00:42:47.813 "dma_device_id": "system", 00:42:47.813 
"dma_device_type": 1 00:42:47.813 }, 00:42:47.813 { 00:42:47.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:42:47.813 "dma_device_type": 2 00:42:47.813 } 00:42:47.813 ], 00:42:47.813 "driver_specific": { 00:42:47.813 "raid": { 00:42:47.813 "uuid": "2519e067-4050-40a1-a558-44d98d86b60f", 00:42:47.813 "strip_size_kb": 0, 00:42:47.813 "state": "online", 00:42:47.813 "raid_level": "raid1", 00:42:47.813 "superblock": true, 00:42:47.813 "num_base_bdevs": 2, 00:42:47.813 "num_base_bdevs_discovered": 2, 00:42:47.813 "num_base_bdevs_operational": 2, 00:42:47.813 "base_bdevs_list": [ 00:42:47.813 { 00:42:47.813 "name": "pt1", 00:42:47.813 "uuid": "00000000-0000-0000-0000-000000000001", 00:42:47.813 "is_configured": true, 00:42:47.813 "data_offset": 256, 00:42:47.813 "data_size": 7936 00:42:47.813 }, 00:42:47.813 { 00:42:47.813 "name": "pt2", 00:42:47.813 "uuid": "00000000-0000-0000-0000-000000000002", 00:42:47.813 "is_configured": true, 00:42:47.813 "data_offset": 256, 00:42:47.813 "data_size": 7936 00:42:47.813 } 00:42:47.813 ] 00:42:47.813 } 00:42:47.813 } 00:42:47.813 }' 00:42:47.813 17:39:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:42:47.813 17:39:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:42:47.813 pt2' 00:42:47.813 17:39:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:42:48.071 17:39:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:42:48.071 17:39:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:42:48.071 17:39:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:42:48.071 17:39:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:42:48.071 17:39:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:48.071 17:39:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:48.071 17:39:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:48.071 17:39:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:42:48.071 17:39:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:42:48.071 17:39:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:42:48.071 17:39:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:42:48.071 17:39:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:42:48.071 17:39:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:48.071 17:39:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:48.071 17:39:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:48.071 17:39:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:42:48.071 17:39:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:42:48.071 17:39:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:42:48.071 17:39:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:42:48.071 17:39:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:48.071 17:39:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:48.072 [2024-11-26 17:39:48.630260] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:42:48.072 17:39:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:48.072 17:39:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 2519e067-4050-40a1-a558-44d98d86b60f '!=' 2519e067-4050-40a1-a558-44d98d86b60f ']' 00:42:48.072 17:39:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:42:48.072 17:39:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:42:48.072 17:39:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:42:48.072 17:39:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:42:48.072 17:39:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:48.072 17:39:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:48.072 [2024-11-26 17:39:48.677926] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:42:48.072 17:39:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:48.072 17:39:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:42:48.072 17:39:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:48.072 17:39:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:48.072 17:39:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:48.072 17:39:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:48.072 17:39:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:42:48.072 17:39:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:42:48.072 17:39:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:48.072 17:39:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:48.072 17:39:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:48.072 17:39:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:48.072 17:39:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:48.072 17:39:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:48.072 17:39:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:48.072 17:39:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:48.072 17:39:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:48.072 "name": "raid_bdev1", 00:42:48.072 "uuid": "2519e067-4050-40a1-a558-44d98d86b60f", 00:42:48.072 "strip_size_kb": 0, 00:42:48.072 "state": "online", 00:42:48.072 "raid_level": "raid1", 00:42:48.072 "superblock": true, 00:42:48.072 "num_base_bdevs": 2, 00:42:48.072 "num_base_bdevs_discovered": 1, 00:42:48.072 "num_base_bdevs_operational": 1, 00:42:48.072 "base_bdevs_list": [ 00:42:48.072 { 00:42:48.072 "name": null, 00:42:48.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:48.072 "is_configured": false, 00:42:48.072 "data_offset": 0, 00:42:48.072 "data_size": 7936 00:42:48.072 }, 00:42:48.072 { 00:42:48.072 "name": "pt2", 00:42:48.072 "uuid": "00000000-0000-0000-0000-000000000002", 00:42:48.072 "is_configured": true, 00:42:48.072 "data_offset": 256, 00:42:48.072 "data_size": 7936 00:42:48.072 } 00:42:48.072 ] 00:42:48.072 }' 00:42:48.072 17:39:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:48.072 17:39:48 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:48.637 17:39:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:42:48.637 17:39:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:48.637 17:39:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:48.637 [2024-11-26 17:39:49.193029] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:42:48.637 [2024-11-26 17:39:49.193158] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:42:48.637 [2024-11-26 17:39:49.193302] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:42:48.638 [2024-11-26 17:39:49.193395] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:42:48.638 [2024-11-26 17:39:49.193452] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:42:48.638 17:39:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:48.638 17:39:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:42:48.638 17:39:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:48.638 17:39:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:48.638 17:39:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:48.638 17:39:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:48.638 17:39:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:42:48.638 17:39:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:42:48.638 17:39:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 
-- # (( i = 1 )) 00:42:48.638 17:39:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:42:48.638 17:39:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:42:48.638 17:39:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:48.638 17:39:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:48.638 17:39:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:48.638 17:39:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:42:48.638 17:39:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:42:48.638 17:39:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:42:48.638 17:39:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:42:48.638 17:39:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:42:48.638 17:39:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:42:48.638 17:39:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:48.638 17:39:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:48.638 [2024-11-26 17:39:49.256869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:42:48.638 [2024-11-26 17:39:49.256957] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:48.638 [2024-11-26 17:39:49.256980] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:42:48.638 [2024-11-26 17:39:49.256994] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:48.638 [2024-11-26 17:39:49.260002] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:48.638 pt2 00:42:48.638 [2024-11-26 17:39:49.260116] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:42:48.638 [2024-11-26 17:39:49.260237] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:42:48.638 [2024-11-26 17:39:49.260304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:42:48.638 [2024-11-26 17:39:49.260446] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:42:48.638 [2024-11-26 17:39:49.260461] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:42:48.638 [2024-11-26 17:39:49.260792] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:42:48.638 [2024-11-26 17:39:49.260986] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:42:48.638 [2024-11-26 17:39:49.260998] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:42:48.638 [2024-11-26 17:39:49.261251] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:48.638 17:39:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:48.638 17:39:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:42:48.638 17:39:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:48.638 17:39:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:48.638 17:39:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:48.638 17:39:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:48.638 17:39:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:42:48.638 17:39:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:48.638 17:39:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:48.638 17:39:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:48.638 17:39:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:48.638 17:39:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:48.638 17:39:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:48.638 17:39:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:48.638 17:39:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:48.638 17:39:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:48.638 17:39:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:48.638 "name": "raid_bdev1", 00:42:48.638 "uuid": "2519e067-4050-40a1-a558-44d98d86b60f", 00:42:48.638 "strip_size_kb": 0, 00:42:48.638 "state": "online", 00:42:48.638 "raid_level": "raid1", 00:42:48.638 "superblock": true, 00:42:48.638 "num_base_bdevs": 2, 00:42:48.638 "num_base_bdevs_discovered": 1, 00:42:48.638 "num_base_bdevs_operational": 1, 00:42:48.638 "base_bdevs_list": [ 00:42:48.638 { 00:42:48.638 "name": null, 00:42:48.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:48.638 "is_configured": false, 00:42:48.638 "data_offset": 256, 00:42:48.638 "data_size": 7936 00:42:48.638 }, 00:42:48.638 { 00:42:48.638 "name": "pt2", 00:42:48.638 "uuid": "00000000-0000-0000-0000-000000000002", 00:42:48.638 "is_configured": true, 00:42:48.638 "data_offset": 256, 00:42:48.638 "data_size": 7936 00:42:48.638 } 00:42:48.638 ] 00:42:48.638 }' 
00:42:48.638 17:39:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:48.638 17:39:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:49.205 17:39:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:42:49.205 17:39:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:49.205 17:39:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:49.205 [2024-11-26 17:39:49.740481] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:42:49.205 [2024-11-26 17:39:49.740645] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:42:49.205 [2024-11-26 17:39:49.740782] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:42:49.205 [2024-11-26 17:39:49.740885] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:42:49.205 [2024-11-26 17:39:49.740939] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:42:49.205 17:39:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:49.205 17:39:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:49.205 17:39:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:42:49.205 17:39:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:49.205 17:39:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:49.205 17:39:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:49.205 17:39:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:42:49.205 17:39:49 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:42:49.205 17:39:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:42:49.205 17:39:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:42:49.205 17:39:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:49.205 17:39:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:49.205 [2024-11-26 17:39:49.800391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:42:49.205 [2024-11-26 17:39:49.800480] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:49.205 [2024-11-26 17:39:49.800529] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:42:49.205 [2024-11-26 17:39:49.800542] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:49.205 [2024-11-26 17:39:49.803528] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:49.205 [2024-11-26 17:39:49.803566] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:42:49.205 pt1 00:42:49.205 [2024-11-26 17:39:49.803689] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:42:49.205 [2024-11-26 17:39:49.803757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:42:49.205 [2024-11-26 17:39:49.803963] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:42:49.205 [2024-11-26 17:39:49.803979] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:42:49.205 [2024-11-26 17:39:49.803999] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:42:49.205 [2024-11-26 
17:39:49.804079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:42:49.205 [2024-11-26 17:39:49.804174] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:42:49.205 [2024-11-26 17:39:49.804185] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:42:49.205 [2024-11-26 17:39:49.804529] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:42:49.205 [2024-11-26 17:39:49.804730] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:42:49.205 [2024-11-26 17:39:49.804748] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:42:49.205 [2024-11-26 17:39:49.804986] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:49.205 17:39:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:49.205 17:39:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:42:49.205 17:39:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:42:49.205 17:39:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:49.205 17:39:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:49.205 17:39:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:49.205 17:39:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:49.205 17:39:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:42:49.205 17:39:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:49.205 17:39:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:42:49.205 17:39:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:49.205 17:39:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:49.205 17:39:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:49.205 17:39:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:49.205 17:39:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:49.205 17:39:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:49.205 17:39:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:49.205 17:39:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:49.205 "name": "raid_bdev1", 00:42:49.205 "uuid": "2519e067-4050-40a1-a558-44d98d86b60f", 00:42:49.205 "strip_size_kb": 0, 00:42:49.205 "state": "online", 00:42:49.205 "raid_level": "raid1", 00:42:49.205 "superblock": true, 00:42:49.205 "num_base_bdevs": 2, 00:42:49.205 "num_base_bdevs_discovered": 1, 00:42:49.205 "num_base_bdevs_operational": 1, 00:42:49.205 "base_bdevs_list": [ 00:42:49.205 { 00:42:49.206 "name": null, 00:42:49.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:49.206 "is_configured": false, 00:42:49.206 "data_offset": 256, 00:42:49.206 "data_size": 7936 00:42:49.206 }, 00:42:49.206 { 00:42:49.206 "name": "pt2", 00:42:49.206 "uuid": "00000000-0000-0000-0000-000000000002", 00:42:49.206 "is_configured": true, 00:42:49.206 "data_offset": 256, 00:42:49.206 "data_size": 7936 00:42:49.206 } 00:42:49.206 ] 00:42:49.206 }' 00:42:49.206 17:39:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:49.206 17:39:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:49.772 17:39:50 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:42:49.772 17:39:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:42:49.772 17:39:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:49.772 17:39:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:49.772 17:39:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:49.772 17:39:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:42:49.772 17:39:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:42:49.772 17:39:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:42:49.772 17:39:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:49.772 17:39:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:49.772 [2024-11-26 17:39:50.332582] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:42:49.772 17:39:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:49.772 17:39:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 2519e067-4050-40a1-a558-44d98d86b60f '!=' 2519e067-4050-40a1-a558-44d98d86b60f ']' 00:42:49.772 17:39:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86501 00:42:49.772 17:39:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86501 ']' 00:42:49.772 17:39:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86501 00:42:49.772 17:39:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:42:49.772 17:39:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:42:49.772 17:39:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86501 00:42:49.772 17:39:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:49.772 killing process with pid 86501 00:42:49.772 17:39:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:49.772 17:39:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86501' 00:42:49.772 17:39:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86501 00:42:49.772 [2024-11-26 17:39:50.414273] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:42:49.772 [2024-11-26 17:39:50.414410] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:42:49.772 17:39:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86501 00:42:49.772 [2024-11-26 17:39:50.414476] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:42:49.772 [2024-11-26 17:39:50.414496] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:42:50.031 [2024-11-26 17:39:50.697643] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:42:51.932 17:39:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:42:51.932 00:42:51.932 real 0m6.922s 00:42:51.932 user 0m10.162s 00:42:51.932 sys 0m1.351s 00:42:51.932 17:39:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:51.932 17:39:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:42:51.932 ************************************ 00:42:51.932 END TEST raid_superblock_test_4k 00:42:51.932 ************************************ 00:42:51.932 17:39:52 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = 
true ']' 00:42:51.932 17:39:52 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:42:51.932 17:39:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:42:51.932 17:39:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:51.932 17:39:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:42:51.932 ************************************ 00:42:51.932 START TEST raid_rebuild_test_sb_4k 00:42:51.932 ************************************ 00:42:51.932 17:39:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:42:51.932 17:39:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:42:51.932 17:39:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:42:51.932 17:39:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:42:51.932 17:39:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:42:51.932 17:39:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:42:51.932 17:39:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:42:51.932 17:39:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:42:51.932 17:39:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:42:51.932 17:39:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:42:51.932 17:39:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:42:51.932 17:39:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:42:51.932 17:39:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:42:51.932 17:39:52 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:42:51.932 17:39:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:42:51.932 17:39:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:42:51.933 17:39:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:42:51.933 17:39:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:42:51.933 17:39:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:42:51.933 17:39:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:42:51.933 17:39:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:42:51.933 17:39:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:42:51.933 17:39:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:42:51.933 17:39:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:42:51.933 17:39:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:42:51.933 17:39:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86831 00:42:51.933 17:39:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:42:51.933 17:39:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86831 00:42:51.933 17:39:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86831 ']' 00:42:51.933 17:39:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:51.933 17:39:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:42:51.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:51.933 17:39:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:51.933 17:39:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:51.933 17:39:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:51.933 I/O size of 3145728 is greater than zero copy threshold (65536). 00:42:51.933 Zero copy mechanism will not be used. 00:42:51.933 [2024-11-26 17:39:52.372877] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:42:51.933 [2024-11-26 17:39:52.373004] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86831 ] 00:42:51.933 [2024-11-26 17:39:52.549756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:52.190 [2024-11-26 17:39:52.716735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:52.480 [2024-11-26 17:39:53.009545] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:42:52.480 [2024-11-26 17:39:53.009758] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:42:52.739 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:52.739 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:42:52.739 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:42:52.739 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:42:52.739 
17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:52.739 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:52.739 BaseBdev1_malloc 00:42:52.739 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:52.739 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:42:52.739 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:52.739 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:52.739 [2024-11-26 17:39:53.356270] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:42:52.739 [2024-11-26 17:39:53.356467] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:52.739 [2024-11-26 17:39:53.356547] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:42:52.739 [2024-11-26 17:39:53.356601] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:52.739 [2024-11-26 17:39:53.359558] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:52.739 [2024-11-26 17:39:53.359677] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:42:52.739 BaseBdev1 00:42:52.739 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:52.739 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:42:52.739 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:42:52.739 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:52.739 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:42:52.739 BaseBdev2_malloc 00:42:52.739 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:52.739 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:42:52.739 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:52.739 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:52.739 [2024-11-26 17:39:53.426278] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:42:52.739 [2024-11-26 17:39:53.426372] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:52.739 [2024-11-26 17:39:53.426401] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:42:52.739 [2024-11-26 17:39:53.426416] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:52.739 [2024-11-26 17:39:53.429358] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:52.739 [2024-11-26 17:39:53.429499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:42:52.739 BaseBdev2 00:42:52.739 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:52.739 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:42:52.739 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:52.997 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:52.997 spare_malloc 00:42:52.997 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:52.997 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b 
spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:42:52.997 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:52.998 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:52.998 spare_delay 00:42:52.998 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:52.998 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:42:52.998 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:52.998 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:52.998 [2024-11-26 17:39:53.521216] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:42:52.998 [2024-11-26 17:39:53.521391] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:52.998 [2024-11-26 17:39:53.521448] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:42:52.998 [2024-11-26 17:39:53.521497] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:52.998 [2024-11-26 17:39:53.524445] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:52.998 [2024-11-26 17:39:53.524564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:42:52.998 spare 00:42:52.998 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:52.998 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:42:52.998 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:52.998 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:52.998 
[2024-11-26 17:39:53.533270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:42:52.998 [2024-11-26 17:39:53.535660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:42:52.998 [2024-11-26 17:39:53.535932] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:42:52.998 [2024-11-26 17:39:53.535956] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:42:52.998 [2024-11-26 17:39:53.536254] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:42:52.998 [2024-11-26 17:39:53.536472] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:42:52.998 [2024-11-26 17:39:53.536483] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:42:52.998 [2024-11-26 17:39:53.536712] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:52.998 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:52.998 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:42:52.998 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:52.998 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:52.998 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:52.998 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:52.998 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:42:52.998 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:52.998 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:52.998 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:52.998 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:52.998 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:52.998 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:52.998 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:52.998 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:52.998 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:52.998 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:52.998 "name": "raid_bdev1", 00:42:52.998 "uuid": "dc10549f-0400-48f1-89e8-6c9fb426e74f", 00:42:52.998 "strip_size_kb": 0, 00:42:52.998 "state": "online", 00:42:52.998 "raid_level": "raid1", 00:42:52.998 "superblock": true, 00:42:52.998 "num_base_bdevs": 2, 00:42:52.998 "num_base_bdevs_discovered": 2, 00:42:52.998 "num_base_bdevs_operational": 2, 00:42:52.998 "base_bdevs_list": [ 00:42:52.998 { 00:42:52.998 "name": "BaseBdev1", 00:42:52.998 "uuid": "eaefe486-93d4-5ac4-b164-59271190bd51", 00:42:52.998 "is_configured": true, 00:42:52.998 "data_offset": 256, 00:42:52.998 "data_size": 7936 00:42:52.998 }, 00:42:52.998 { 00:42:52.998 "name": "BaseBdev2", 00:42:52.998 "uuid": "b8b55a7d-80e3-5910-9034-f36c1e52e05c", 00:42:52.998 "is_configured": true, 00:42:52.998 "data_offset": 256, 00:42:52.998 "data_size": 7936 00:42:52.998 } 00:42:52.998 ] 00:42:52.998 }' 00:42:52.998 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:52.998 17:39:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set 
+x 00:42:53.564 17:39:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:42:53.564 17:39:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:42:53.564 17:39:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:53.564 17:39:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:53.564 [2024-11-26 17:39:54.013089] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:42:53.564 17:39:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:53.564 17:39:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:42:53.564 17:39:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:53.564 17:39:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:42:53.564 17:39:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:53.564 17:39:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:53.564 17:39:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:53.564 17:39:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:42:53.564 17:39:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:42:53.564 17:39:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:42:53.564 17:39:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:42:53.564 17:39:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:42:53.564 17:39:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:42:53.564 17:39:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:42:53.564 17:39:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:42:53.564 17:39:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:42:53.564 17:39:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:42:53.564 17:39:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:42:53.564 17:39:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:42:53.564 17:39:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:42:53.564 17:39:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:42:53.823 [2024-11-26 17:39:54.340311] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:42:53.823 /dev/nbd0 00:42:53.823 17:39:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:42:53.823 17:39:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:42:53.823 17:39:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:42:53.823 17:39:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:42:53.823 17:39:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:42:53.823 17:39:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:42:53.823 17:39:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:42:53.823 17:39:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:42:53.823 17:39:54 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@888 -- # (( i = 1 )) 00:42:53.823 17:39:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:42:53.823 17:39:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:53.823 1+0 records in 00:42:53.823 1+0 records out 00:42:53.823 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000631538 s, 6.5 MB/s 00:42:53.823 17:39:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:53.823 17:39:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:42:53.823 17:39:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:53.823 17:39:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:42:53.823 17:39:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:42:53.823 17:39:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:42:53.823 17:39:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:42:53.823 17:39:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:42:53.823 17:39:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:42:53.823 17:39:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:42:54.757 7936+0 records in 00:42:54.757 7936+0 records out 00:42:54.757 32505856 bytes (33 MB, 31 MiB) copied, 0.788187 s, 41.2 MB/s 00:42:54.757 17:39:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:42:54.757 17:39:55 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:42:54.757 17:39:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:42:54.757 17:39:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:42:54.757 17:39:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:42:54.757 17:39:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:54.757 17:39:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:42:55.016 [2024-11-26 17:39:55.470999] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:55.016 17:39:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:42:55.016 17:39:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:42:55.016 17:39:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:42:55.016 17:39:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:55.016 17:39:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:55.016 17:39:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:42:55.016 17:39:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:42:55.016 17:39:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:42:55.016 17:39:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:42:55.016 17:39:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:55.016 17:39:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:55.016 [2024-11-26 17:39:55.495764] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:42:55.016 17:39:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:55.016 17:39:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:42:55.016 17:39:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:55.016 17:39:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:55.016 17:39:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:55.016 17:39:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:55.016 17:39:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:42:55.016 17:39:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:55.016 17:39:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:55.016 17:39:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:55.016 17:39:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:55.016 17:39:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:55.016 17:39:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:55.016 17:39:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:55.016 17:39:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:55.016 17:39:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:55.016 17:39:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:55.016 "name": 
"raid_bdev1", 00:42:55.016 "uuid": "dc10549f-0400-48f1-89e8-6c9fb426e74f", 00:42:55.016 "strip_size_kb": 0, 00:42:55.016 "state": "online", 00:42:55.016 "raid_level": "raid1", 00:42:55.016 "superblock": true, 00:42:55.016 "num_base_bdevs": 2, 00:42:55.016 "num_base_bdevs_discovered": 1, 00:42:55.016 "num_base_bdevs_operational": 1, 00:42:55.016 "base_bdevs_list": [ 00:42:55.016 { 00:42:55.016 "name": null, 00:42:55.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:55.016 "is_configured": false, 00:42:55.016 "data_offset": 0, 00:42:55.016 "data_size": 7936 00:42:55.016 }, 00:42:55.016 { 00:42:55.016 "name": "BaseBdev2", 00:42:55.016 "uuid": "b8b55a7d-80e3-5910-9034-f36c1e52e05c", 00:42:55.016 "is_configured": true, 00:42:55.016 "data_offset": 256, 00:42:55.016 "data_size": 7936 00:42:55.016 } 00:42:55.016 ] 00:42:55.016 }' 00:42:55.016 17:39:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:55.016 17:39:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:55.275 17:39:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:42:55.275 17:39:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:55.275 17:39:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:55.275 [2024-11-26 17:39:55.962999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:42:55.533 [2024-11-26 17:39:55.985236] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:42:55.533 17:39:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:55.533 17:39:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:42:55.533 [2024-11-26 17:39:55.987782] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:42:56.469 17:39:56 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:56.469 17:39:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:56.469 17:39:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:56.469 17:39:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:56.469 17:39:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:56.469 17:39:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:56.469 17:39:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:56.469 17:39:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:56.469 17:39:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:56.469 17:39:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:56.469 17:39:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:56.469 "name": "raid_bdev1", 00:42:56.469 "uuid": "dc10549f-0400-48f1-89e8-6c9fb426e74f", 00:42:56.469 "strip_size_kb": 0, 00:42:56.469 "state": "online", 00:42:56.469 "raid_level": "raid1", 00:42:56.469 "superblock": true, 00:42:56.469 "num_base_bdevs": 2, 00:42:56.469 "num_base_bdevs_discovered": 2, 00:42:56.469 "num_base_bdevs_operational": 2, 00:42:56.469 "process": { 00:42:56.469 "type": "rebuild", 00:42:56.469 "target": "spare", 00:42:56.469 "progress": { 00:42:56.469 "blocks": 2560, 00:42:56.469 "percent": 32 00:42:56.469 } 00:42:56.469 }, 00:42:56.469 "base_bdevs_list": [ 00:42:56.469 { 00:42:56.469 "name": "spare", 00:42:56.469 "uuid": "8f8e7eb3-60a3-5020-85f2-7e69e186b686", 00:42:56.469 "is_configured": true, 00:42:56.469 "data_offset": 256, 
00:42:56.469 "data_size": 7936 00:42:56.469 }, 00:42:56.469 { 00:42:56.469 "name": "BaseBdev2", 00:42:56.469 "uuid": "b8b55a7d-80e3-5910-9034-f36c1e52e05c", 00:42:56.469 "is_configured": true, 00:42:56.469 "data_offset": 256, 00:42:56.469 "data_size": 7936 00:42:56.469 } 00:42:56.469 ] 00:42:56.469 }' 00:42:56.469 17:39:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:56.469 17:39:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:56.469 17:39:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:56.469 17:39:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:56.469 17:39:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:42:56.469 17:39:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:56.469 17:39:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:56.469 [2024-11-26 17:39:57.147565] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:56.728 [2024-11-26 17:39:57.198364] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:42:56.728 [2024-11-26 17:39:57.198568] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:56.728 [2024-11-26 17:39:57.198592] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:56.728 [2024-11-26 17:39:57.198605] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:42:56.728 17:39:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:56.728 17:39:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:42:56.728 
17:39:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:42:56.728 17:39:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:42:56.728 17:39:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:42:56.728 17:39:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:42:56.728 17:39:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:42:56.728 17:39:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:42:56.728 17:39:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:42:56.728 17:39:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:42:56.728 17:39:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:42:56.728 17:39:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:56.728 17:39:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:56.728 17:39:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:56.728 17:39:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:56.728 17:39:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:56.728 17:39:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:42:56.728 "name": "raid_bdev1", 00:42:56.728 "uuid": "dc10549f-0400-48f1-89e8-6c9fb426e74f", 00:42:56.728 "strip_size_kb": 0, 00:42:56.728 "state": "online", 00:42:56.728 "raid_level": "raid1", 00:42:56.728 "superblock": true, 00:42:56.728 "num_base_bdevs": 2, 00:42:56.728 "num_base_bdevs_discovered": 1, 00:42:56.728 
"num_base_bdevs_operational": 1, 00:42:56.728 "base_bdevs_list": [ 00:42:56.728 { 00:42:56.728 "name": null, 00:42:56.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:56.728 "is_configured": false, 00:42:56.728 "data_offset": 0, 00:42:56.728 "data_size": 7936 00:42:56.728 }, 00:42:56.728 { 00:42:56.728 "name": "BaseBdev2", 00:42:56.729 "uuid": "b8b55a7d-80e3-5910-9034-f36c1e52e05c", 00:42:56.729 "is_configured": true, 00:42:56.729 "data_offset": 256, 00:42:56.729 "data_size": 7936 00:42:56.729 } 00:42:56.729 ] 00:42:56.729 }' 00:42:56.729 17:39:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:42:56.729 17:39:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:57.296 17:39:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:42:57.296 17:39:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:57.296 17:39:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:42:57.296 17:39:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:42:57.296 17:39:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:57.296 17:39:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:57.296 17:39:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:57.296 17:39:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:57.296 17:39:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:57.296 17:39:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:57.296 17:39:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:57.296 
"name": "raid_bdev1", 00:42:57.296 "uuid": "dc10549f-0400-48f1-89e8-6c9fb426e74f", 00:42:57.296 "strip_size_kb": 0, 00:42:57.296 "state": "online", 00:42:57.296 "raid_level": "raid1", 00:42:57.296 "superblock": true, 00:42:57.296 "num_base_bdevs": 2, 00:42:57.296 "num_base_bdevs_discovered": 1, 00:42:57.296 "num_base_bdevs_operational": 1, 00:42:57.296 "base_bdevs_list": [ 00:42:57.296 { 00:42:57.296 "name": null, 00:42:57.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:57.296 "is_configured": false, 00:42:57.296 "data_offset": 0, 00:42:57.296 "data_size": 7936 00:42:57.296 }, 00:42:57.296 { 00:42:57.296 "name": "BaseBdev2", 00:42:57.296 "uuid": "b8b55a7d-80e3-5910-9034-f36c1e52e05c", 00:42:57.296 "is_configured": true, 00:42:57.296 "data_offset": 256, 00:42:57.296 "data_size": 7936 00:42:57.296 } 00:42:57.296 ] 00:42:57.296 }' 00:42:57.296 17:39:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:57.296 17:39:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:42:57.296 17:39:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:57.296 17:39:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:42:57.296 17:39:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:42:57.296 17:39:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:57.296 17:39:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:57.296 [2024-11-26 17:39:57.843574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:42:57.296 [2024-11-26 17:39:57.865276] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:42:57.296 17:39:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:42:57.296 17:39:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:42:57.296 [2024-11-26 17:39:57.867915] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:42:58.232 17:39:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:58.232 17:39:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:58.232 17:39:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:58.232 17:39:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:58.232 17:39:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:58.232 17:39:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:58.232 17:39:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:58.232 17:39:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:58.232 17:39:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:58.232 17:39:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:58.490 17:39:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:58.490 "name": "raid_bdev1", 00:42:58.490 "uuid": "dc10549f-0400-48f1-89e8-6c9fb426e74f", 00:42:58.490 "strip_size_kb": 0, 00:42:58.490 "state": "online", 00:42:58.490 "raid_level": "raid1", 00:42:58.490 "superblock": true, 00:42:58.490 "num_base_bdevs": 2, 00:42:58.490 "num_base_bdevs_discovered": 2, 00:42:58.490 "num_base_bdevs_operational": 2, 00:42:58.490 "process": { 00:42:58.490 "type": "rebuild", 00:42:58.490 "target": "spare", 00:42:58.490 "progress": { 00:42:58.490 "blocks": 2560, 00:42:58.490 
"percent": 32 00:42:58.490 } 00:42:58.490 }, 00:42:58.490 "base_bdevs_list": [ 00:42:58.490 { 00:42:58.490 "name": "spare", 00:42:58.490 "uuid": "8f8e7eb3-60a3-5020-85f2-7e69e186b686", 00:42:58.490 "is_configured": true, 00:42:58.490 "data_offset": 256, 00:42:58.490 "data_size": 7936 00:42:58.490 }, 00:42:58.490 { 00:42:58.490 "name": "BaseBdev2", 00:42:58.490 "uuid": "b8b55a7d-80e3-5910-9034-f36c1e52e05c", 00:42:58.490 "is_configured": true, 00:42:58.490 "data_offset": 256, 00:42:58.490 "data_size": 7936 00:42:58.490 } 00:42:58.490 ] 00:42:58.490 }' 00:42:58.490 17:39:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:58.490 17:39:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:58.490 17:39:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:58.490 17:39:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:58.490 17:39:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:42:58.490 17:39:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:42:58.490 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:42:58.490 17:39:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:42:58.490 17:39:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:42:58.490 17:39:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:42:58.490 17:39:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=694 00:42:58.490 17:39:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:42:58.490 17:39:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:42:58.490 17:39:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:58.490 17:39:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:58.490 17:39:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:58.490 17:39:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:58.490 17:39:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:58.490 17:39:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:58.490 17:39:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:58.490 17:39:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:58.490 17:39:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:58.490 17:39:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:58.490 "name": "raid_bdev1", 00:42:58.490 "uuid": "dc10549f-0400-48f1-89e8-6c9fb426e74f", 00:42:58.490 "strip_size_kb": 0, 00:42:58.490 "state": "online", 00:42:58.490 "raid_level": "raid1", 00:42:58.490 "superblock": true, 00:42:58.490 "num_base_bdevs": 2, 00:42:58.490 "num_base_bdevs_discovered": 2, 00:42:58.490 "num_base_bdevs_operational": 2, 00:42:58.490 "process": { 00:42:58.490 "type": "rebuild", 00:42:58.490 "target": "spare", 00:42:58.490 "progress": { 00:42:58.490 "blocks": 2816, 00:42:58.490 "percent": 35 00:42:58.490 } 00:42:58.490 }, 00:42:58.490 "base_bdevs_list": [ 00:42:58.490 { 00:42:58.490 "name": "spare", 00:42:58.490 "uuid": "8f8e7eb3-60a3-5020-85f2-7e69e186b686", 00:42:58.490 "is_configured": true, 00:42:58.490 "data_offset": 256, 00:42:58.490 "data_size": 7936 00:42:58.490 }, 00:42:58.490 { 00:42:58.490 "name": "BaseBdev2", 
00:42:58.490 "uuid": "b8b55a7d-80e3-5910-9034-f36c1e52e05c", 00:42:58.490 "is_configured": true, 00:42:58.490 "data_offset": 256, 00:42:58.490 "data_size": 7936 00:42:58.490 } 00:42:58.490 ] 00:42:58.490 }' 00:42:58.490 17:39:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:58.490 17:39:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:58.490 17:39:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:58.490 17:39:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:58.490 17:39:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:42:59.863 17:40:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:42:59.863 17:40:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:59.863 17:40:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:42:59.863 17:40:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:42:59.863 17:40:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:42:59.863 17:40:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:42:59.863 17:40:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:42:59.863 17:40:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:59.863 17:40:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:42:59.863 17:40:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:59.863 17:40:00 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:59.863 17:40:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:42:59.863 "name": "raid_bdev1", 00:42:59.863 "uuid": "dc10549f-0400-48f1-89e8-6c9fb426e74f", 00:42:59.863 "strip_size_kb": 0, 00:42:59.863 "state": "online", 00:42:59.863 "raid_level": "raid1", 00:42:59.863 "superblock": true, 00:42:59.863 "num_base_bdevs": 2, 00:42:59.863 "num_base_bdevs_discovered": 2, 00:42:59.863 "num_base_bdevs_operational": 2, 00:42:59.863 "process": { 00:42:59.863 "type": "rebuild", 00:42:59.863 "target": "spare", 00:42:59.863 "progress": { 00:42:59.863 "blocks": 5632, 00:42:59.863 "percent": 70 00:42:59.863 } 00:42:59.863 }, 00:42:59.863 "base_bdevs_list": [ 00:42:59.863 { 00:42:59.863 "name": "spare", 00:42:59.863 "uuid": "8f8e7eb3-60a3-5020-85f2-7e69e186b686", 00:42:59.863 "is_configured": true, 00:42:59.863 "data_offset": 256, 00:42:59.863 "data_size": 7936 00:42:59.863 }, 00:42:59.863 { 00:42:59.863 "name": "BaseBdev2", 00:42:59.863 "uuid": "b8b55a7d-80e3-5910-9034-f36c1e52e05c", 00:42:59.863 "is_configured": true, 00:42:59.863 "data_offset": 256, 00:42:59.863 "data_size": 7936 00:42:59.863 } 00:42:59.863 ] 00:42:59.863 }' 00:42:59.863 17:40:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:42:59.863 17:40:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:59.863 17:40:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:42:59.863 17:40:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:42:59.863 17:40:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:43:00.430 [2024-11-26 17:40:00.995007] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:43:00.430 [2024-11-26 17:40:00.995259] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:43:00.430 [2024-11-26 17:40:00.995492] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:00.688 17:40:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:43:00.688 17:40:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:43:00.688 17:40:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:43:00.688 17:40:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:43:00.688 17:40:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:43:00.688 17:40:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:43:00.688 17:40:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:00.688 17:40:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:00.688 17:40:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:00.688 17:40:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:43:00.688 17:40:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:00.688 17:40:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:43:00.688 "name": "raid_bdev1", 00:43:00.688 "uuid": "dc10549f-0400-48f1-89e8-6c9fb426e74f", 00:43:00.688 "strip_size_kb": 0, 00:43:00.688 "state": "online", 00:43:00.688 "raid_level": "raid1", 00:43:00.688 "superblock": true, 00:43:00.688 "num_base_bdevs": 2, 00:43:00.688 "num_base_bdevs_discovered": 2, 00:43:00.688 "num_base_bdevs_operational": 2, 00:43:00.688 "base_bdevs_list": [ 00:43:00.688 { 00:43:00.688 "name": 
"spare", 00:43:00.688 "uuid": "8f8e7eb3-60a3-5020-85f2-7e69e186b686", 00:43:00.688 "is_configured": true, 00:43:00.688 "data_offset": 256, 00:43:00.688 "data_size": 7936 00:43:00.688 }, 00:43:00.688 { 00:43:00.688 "name": "BaseBdev2", 00:43:00.688 "uuid": "b8b55a7d-80e3-5910-9034-f36c1e52e05c", 00:43:00.688 "is_configured": true, 00:43:00.688 "data_offset": 256, 00:43:00.688 "data_size": 7936 00:43:00.688 } 00:43:00.688 ] 00:43:00.688 }' 00:43:00.948 17:40:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:43:00.948 17:40:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:43:00.948 17:40:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:43:00.948 17:40:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:43:00.948 17:40:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:43:00.948 17:40:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:43:00.948 17:40:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:43:00.948 17:40:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:43:00.948 17:40:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:43:00.948 17:40:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:43:00.948 17:40:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:00.948 17:40:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:00.948 17:40:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:00.948 17:40:01 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:43:00.948 17:40:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:00.948 17:40:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:43:00.948 "name": "raid_bdev1", 00:43:00.948 "uuid": "dc10549f-0400-48f1-89e8-6c9fb426e74f", 00:43:00.948 "strip_size_kb": 0, 00:43:00.948 "state": "online", 00:43:00.948 "raid_level": "raid1", 00:43:00.948 "superblock": true, 00:43:00.948 "num_base_bdevs": 2, 00:43:00.948 "num_base_bdevs_discovered": 2, 00:43:00.948 "num_base_bdevs_operational": 2, 00:43:00.948 "base_bdevs_list": [ 00:43:00.948 { 00:43:00.948 "name": "spare", 00:43:00.948 "uuid": "8f8e7eb3-60a3-5020-85f2-7e69e186b686", 00:43:00.948 "is_configured": true, 00:43:00.948 "data_offset": 256, 00:43:00.948 "data_size": 7936 00:43:00.948 }, 00:43:00.948 { 00:43:00.948 "name": "BaseBdev2", 00:43:00.948 "uuid": "b8b55a7d-80e3-5910-9034-f36c1e52e05c", 00:43:00.948 "is_configured": true, 00:43:00.948 "data_offset": 256, 00:43:00.948 "data_size": 7936 00:43:00.948 } 00:43:00.948 ] 00:43:00.948 }' 00:43:00.948 17:40:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:43:00.948 17:40:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:43:00.948 17:40:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:43:00.948 17:40:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:43:00.948 17:40:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:43:00.948 17:40:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:00.948 17:40:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:00.948 17:40:01 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:00.948 17:40:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:00.948 17:40:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:43:00.948 17:40:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:00.948 17:40:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:00.948 17:40:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:00.948 17:40:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:00.948 17:40:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:00.948 17:40:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:00.948 17:40:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:43:00.948 17:40:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:00.948 17:40:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:01.207 17:40:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:01.207 "name": "raid_bdev1", 00:43:01.207 "uuid": "dc10549f-0400-48f1-89e8-6c9fb426e74f", 00:43:01.207 "strip_size_kb": 0, 00:43:01.207 "state": "online", 00:43:01.207 "raid_level": "raid1", 00:43:01.207 "superblock": true, 00:43:01.207 "num_base_bdevs": 2, 00:43:01.207 "num_base_bdevs_discovered": 2, 00:43:01.207 "num_base_bdevs_operational": 2, 00:43:01.207 "base_bdevs_list": [ 00:43:01.207 { 00:43:01.207 "name": "spare", 00:43:01.207 "uuid": "8f8e7eb3-60a3-5020-85f2-7e69e186b686", 00:43:01.207 "is_configured": true, 00:43:01.207 "data_offset": 256, 00:43:01.207 "data_size": 7936 00:43:01.207 }, 00:43:01.207 
{ 00:43:01.207 "name": "BaseBdev2", 00:43:01.207 "uuid": "b8b55a7d-80e3-5910-9034-f36c1e52e05c", 00:43:01.207 "is_configured": true, 00:43:01.207 "data_offset": 256, 00:43:01.207 "data_size": 7936 00:43:01.207 } 00:43:01.207 ] 00:43:01.207 }' 00:43:01.207 17:40:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:01.207 17:40:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:43:01.466 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:43:01.466 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:01.466 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:43:01.466 [2024-11-26 17:40:02.106583] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:43:01.466 [2024-11-26 17:40:02.106740] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:43:01.466 [2024-11-26 17:40:02.106864] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:43:01.466 [2024-11-26 17:40:02.106958] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:43:01.466 [2024-11-26 17:40:02.106975] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:43:01.466 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:01.466 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:01.466 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:43:01.466 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:01.466 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:43:01.466 
17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:01.466 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:43:01.466 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:43:01.466 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:43:01.466 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:43:01.466 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:43:01.466 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:43:01.466 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:43:01.466 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:43:01.466 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:43:01.466 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:43:01.466 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:43:01.466 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:43:01.467 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:43:01.725 /dev/nbd0 00:43:01.983 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:43:01.983 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:43:01.983 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:43:01.983 17:40:02 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:43:01.983 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:43:01.983 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:43:01.983 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:43:01.983 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:43:01.983 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:43:01.983 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:43:01.983 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:01.983 1+0 records in 00:43:01.983 1+0 records out 00:43:01.983 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000456834 s, 9.0 MB/s 00:43:01.983 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:01.983 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:43:01.983 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:01.983 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:43:01.983 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:43:01.983 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:43:01.983 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:43:01.983 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:43:02.241 /dev/nbd1 00:43:02.241 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:43:02.241 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:43:02.241 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:43:02.241 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:43:02.241 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:43:02.241 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:43:02.241 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:43:02.241 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:43:02.241 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:43:02.241 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:43:02.241 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:02.241 1+0 records in 00:43:02.241 1+0 records out 00:43:02.241 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272121 s, 15.1 MB/s 00:43:02.241 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:02.241 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:43:02.241 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:02.241 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:43:02.241 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:43:02.241 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:43:02.241 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:43:02.241 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:43:02.499 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:43:02.499 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:43:02.499 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:43:02.499 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:43:02.499 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:43:02.499 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:02.499 17:40:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:43:02.499 17:40:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:43:02.499 17:40:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:43:02.758 17:40:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:43:02.758 17:40:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:02.758 17:40:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:02.758 17:40:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:43:02.758 17:40:03 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@41 -- # break 00:43:02.758 17:40:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:43:02.758 17:40:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:02.758 17:40:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:43:02.758 17:40:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:43:02.758 17:40:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:43:02.758 17:40:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:43:02.758 17:40:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:02.758 17:40:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:02.758 17:40:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:43:02.758 17:40:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:43:02.758 17:40:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:43:02.758 17:40:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:43:02.758 17:40:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:43:02.758 17:40:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:02.758 17:40:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:43:03.018 17:40:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.018 17:40:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:43:03.018 17:40:03 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.018 17:40:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:43:03.018 [2024-11-26 17:40:03.464570] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:43:03.018 [2024-11-26 17:40:03.464648] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:03.018 [2024-11-26 17:40:03.464694] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:43:03.018 [2024-11-26 17:40:03.464713] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:03.018 [2024-11-26 17:40:03.467903] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:03.018 [2024-11-26 17:40:03.467946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:43:03.018 [2024-11-26 17:40:03.468061] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:43:03.018 [2024-11-26 17:40:03.468125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:43:03.018 [2024-11-26 17:40:03.468316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:43:03.018 spare 00:43:03.018 17:40:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.018 17:40:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:43:03.018 17:40:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.018 17:40:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:43:03.018 [2024-11-26 17:40:03.568262] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:43:03.018 [2024-11-26 17:40:03.568361] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:43:03.018 [2024-11-26 17:40:03.568820] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:43:03.018 [2024-11-26 17:40:03.569103] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:43:03.018 [2024-11-26 17:40:03.569124] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:43:03.018 [2024-11-26 17:40:03.569378] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:03.018 17:40:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.018 17:40:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:43:03.018 17:40:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:03.018 17:40:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:03.018 17:40:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:03.018 17:40:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:03.018 17:40:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:43:03.018 17:40:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:03.018 17:40:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:03.018 17:40:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:03.018 17:40:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:03.018 17:40:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:03.018 17:40:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:03.018 17:40:03 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.018 17:40:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:43:03.018 17:40:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.018 17:40:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:03.018 "name": "raid_bdev1", 00:43:03.018 "uuid": "dc10549f-0400-48f1-89e8-6c9fb426e74f", 00:43:03.018 "strip_size_kb": 0, 00:43:03.018 "state": "online", 00:43:03.018 "raid_level": "raid1", 00:43:03.018 "superblock": true, 00:43:03.018 "num_base_bdevs": 2, 00:43:03.018 "num_base_bdevs_discovered": 2, 00:43:03.018 "num_base_bdevs_operational": 2, 00:43:03.018 "base_bdevs_list": [ 00:43:03.018 { 00:43:03.018 "name": "spare", 00:43:03.018 "uuid": "8f8e7eb3-60a3-5020-85f2-7e69e186b686", 00:43:03.018 "is_configured": true, 00:43:03.018 "data_offset": 256, 00:43:03.018 "data_size": 7936 00:43:03.018 }, 00:43:03.018 { 00:43:03.018 "name": "BaseBdev2", 00:43:03.018 "uuid": "b8b55a7d-80e3-5910-9034-f36c1e52e05c", 00:43:03.018 "is_configured": true, 00:43:03.018 "data_offset": 256, 00:43:03.018 "data_size": 7936 00:43:03.018 } 00:43:03.018 ] 00:43:03.018 }' 00:43:03.018 17:40:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:03.018 17:40:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:43:03.585 17:40:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:43:03.585 17:40:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:43:03.585 17:40:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:43:03.585 17:40:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:43:03.585 17:40:04 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:43:03.585 17:40:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:03.585 17:40:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:03.585 17:40:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.585 17:40:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:43:03.585 17:40:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.585 17:40:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:43:03.585 "name": "raid_bdev1", 00:43:03.585 "uuid": "dc10549f-0400-48f1-89e8-6c9fb426e74f", 00:43:03.585 "strip_size_kb": 0, 00:43:03.585 "state": "online", 00:43:03.585 "raid_level": "raid1", 00:43:03.585 "superblock": true, 00:43:03.585 "num_base_bdevs": 2, 00:43:03.585 "num_base_bdevs_discovered": 2, 00:43:03.585 "num_base_bdevs_operational": 2, 00:43:03.585 "base_bdevs_list": [ 00:43:03.585 { 00:43:03.585 "name": "spare", 00:43:03.585 "uuid": "8f8e7eb3-60a3-5020-85f2-7e69e186b686", 00:43:03.585 "is_configured": true, 00:43:03.585 "data_offset": 256, 00:43:03.585 "data_size": 7936 00:43:03.585 }, 00:43:03.585 { 00:43:03.585 "name": "BaseBdev2", 00:43:03.585 "uuid": "b8b55a7d-80e3-5910-9034-f36c1e52e05c", 00:43:03.585 "is_configured": true, 00:43:03.585 "data_offset": 256, 00:43:03.585 "data_size": 7936 00:43:03.585 } 00:43:03.585 ] 00:43:03.585 }' 00:43:03.585 17:40:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:43:03.585 17:40:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:43:03.585 17:40:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:43:03.585 17:40:04 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:43:03.586 17:40:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:03.586 17:40:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:43:03.586 17:40:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.586 17:40:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:43:03.586 17:40:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.586 17:40:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:43:03.586 17:40:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:43:03.586 17:40:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.586 17:40:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:43:03.586 [2024-11-26 17:40:04.252454] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:43:03.586 17:40:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.586 17:40:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:43:03.586 17:40:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:03.586 17:40:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:03.586 17:40:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:03.586 17:40:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:03.586 17:40:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:43:03.586 17:40:04 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:03.586 17:40:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:03.586 17:40:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:03.586 17:40:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:03.586 17:40:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:03.586 17:40:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:03.586 17:40:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.586 17:40:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:43:03.586 17:40:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.845 17:40:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:03.845 "name": "raid_bdev1", 00:43:03.845 "uuid": "dc10549f-0400-48f1-89e8-6c9fb426e74f", 00:43:03.845 "strip_size_kb": 0, 00:43:03.845 "state": "online", 00:43:03.845 "raid_level": "raid1", 00:43:03.845 "superblock": true, 00:43:03.845 "num_base_bdevs": 2, 00:43:03.845 "num_base_bdevs_discovered": 1, 00:43:03.845 "num_base_bdevs_operational": 1, 00:43:03.845 "base_bdevs_list": [ 00:43:03.845 { 00:43:03.845 "name": null, 00:43:03.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:03.845 "is_configured": false, 00:43:03.845 "data_offset": 0, 00:43:03.845 "data_size": 7936 00:43:03.845 }, 00:43:03.845 { 00:43:03.845 "name": "BaseBdev2", 00:43:03.845 "uuid": "b8b55a7d-80e3-5910-9034-f36c1e52e05c", 00:43:03.845 "is_configured": true, 00:43:03.845 "data_offset": 256, 00:43:03.845 "data_size": 7936 00:43:03.845 } 00:43:03.845 ] 00:43:03.845 }' 00:43:03.845 17:40:04 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:03.845 17:40:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:43:04.104 17:40:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:43:04.104 17:40:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.104 17:40:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:43:04.104 [2024-11-26 17:40:04.731769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:43:04.104 [2024-11-26 17:40:04.732067] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:43:04.104 [2024-11-26 17:40:04.732099] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:43:04.104 [2024-11-26 17:40:04.732142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:43:04.104 [2024-11-26 17:40:04.753861] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:43:04.104 17:40:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.104 17:40:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:43:04.104 [2024-11-26 17:40:04.756594] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:43:05.482 17:40:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:43:05.482 17:40:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:43:05.482 17:40:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:43:05.482 17:40:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:43:05.482 
17:40:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:43:05.482 17:40:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:05.482 17:40:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:05.482 17:40:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:05.482 17:40:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:43:05.482 17:40:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:05.482 17:40:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:43:05.482 "name": "raid_bdev1", 00:43:05.482 "uuid": "dc10549f-0400-48f1-89e8-6c9fb426e74f", 00:43:05.482 "strip_size_kb": 0, 00:43:05.482 "state": "online", 00:43:05.482 "raid_level": "raid1", 00:43:05.482 "superblock": true, 00:43:05.482 "num_base_bdevs": 2, 00:43:05.482 "num_base_bdevs_discovered": 2, 00:43:05.482 "num_base_bdevs_operational": 2, 00:43:05.482 "process": { 00:43:05.482 "type": "rebuild", 00:43:05.482 "target": "spare", 00:43:05.482 "progress": { 00:43:05.482 "blocks": 2560, 00:43:05.482 "percent": 32 00:43:05.482 } 00:43:05.482 }, 00:43:05.482 "base_bdevs_list": [ 00:43:05.482 { 00:43:05.482 "name": "spare", 00:43:05.482 "uuid": "8f8e7eb3-60a3-5020-85f2-7e69e186b686", 00:43:05.482 "is_configured": true, 00:43:05.482 "data_offset": 256, 00:43:05.482 "data_size": 7936 00:43:05.482 }, 00:43:05.482 { 00:43:05.482 "name": "BaseBdev2", 00:43:05.482 "uuid": "b8b55a7d-80e3-5910-9034-f36c1e52e05c", 00:43:05.482 "is_configured": true, 00:43:05.482 "data_offset": 256, 00:43:05.482 "data_size": 7936 00:43:05.482 } 00:43:05.482 ] 00:43:05.482 }' 00:43:05.482 17:40:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:43:05.482 17:40:05 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:43:05.482 17:40:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:43:05.482 17:40:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:43:05.482 17:40:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:43:05.482 17:40:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:05.482 17:40:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:43:05.482 [2024-11-26 17:40:05.888597] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:43:05.482 [2024-11-26 17:40:05.967145] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:43:05.482 [2024-11-26 17:40:05.967224] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:05.482 [2024-11-26 17:40:05.967241] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:43:05.482 [2024-11-26 17:40:05.967252] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:43:05.482 17:40:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:05.482 17:40:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:43:05.482 17:40:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:05.482 17:40:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:05.482 17:40:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:05.482 17:40:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:05.482 17:40:06 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:43:05.482 17:40:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:05.482 17:40:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:05.482 17:40:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:05.482 17:40:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:05.482 17:40:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:05.482 17:40:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:05.482 17:40:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:05.482 17:40:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:43:05.482 17:40:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:05.482 17:40:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:05.482 "name": "raid_bdev1", 00:43:05.482 "uuid": "dc10549f-0400-48f1-89e8-6c9fb426e74f", 00:43:05.482 "strip_size_kb": 0, 00:43:05.482 "state": "online", 00:43:05.482 "raid_level": "raid1", 00:43:05.482 "superblock": true, 00:43:05.482 "num_base_bdevs": 2, 00:43:05.482 "num_base_bdevs_discovered": 1, 00:43:05.482 "num_base_bdevs_operational": 1, 00:43:05.482 "base_bdevs_list": [ 00:43:05.482 { 00:43:05.482 "name": null, 00:43:05.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:05.482 "is_configured": false, 00:43:05.482 "data_offset": 0, 00:43:05.482 "data_size": 7936 00:43:05.482 }, 00:43:05.482 { 00:43:05.482 "name": "BaseBdev2", 00:43:05.482 "uuid": "b8b55a7d-80e3-5910-9034-f36c1e52e05c", 00:43:05.482 "is_configured": true, 00:43:05.482 "data_offset": 256, 00:43:05.482 
"data_size": 7936 00:43:05.482 } 00:43:05.482 ] 00:43:05.482 }' 00:43:05.482 17:40:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:05.482 17:40:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:43:06.049 17:40:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:43:06.049 17:40:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:06.049 17:40:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:43:06.049 [2024-11-26 17:40:06.466310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:43:06.049 [2024-11-26 17:40:06.466406] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:06.049 [2024-11-26 17:40:06.466434] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:43:06.049 [2024-11-26 17:40:06.466448] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:06.049 [2024-11-26 17:40:06.467062] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:06.049 [2024-11-26 17:40:06.467097] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:43:06.049 [2024-11-26 17:40:06.467216] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:43:06.049 [2024-11-26 17:40:06.467239] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:43:06.049 [2024-11-26 17:40:06.467251] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:43:06.049 [2024-11-26 17:40:06.467286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:43:06.049 [2024-11-26 17:40:06.484962] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:43:06.049 spare 00:43:06.049 17:40:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:06.049 17:40:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:43:06.049 [2024-11-26 17:40:06.487151] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:43:06.987 17:40:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:43:06.987 17:40:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:43:06.987 17:40:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:43:06.987 17:40:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:43:06.987 17:40:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:43:06.987 17:40:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:06.987 17:40:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:06.987 17:40:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:06.987 17:40:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:43:06.987 17:40:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:06.987 17:40:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:43:06.987 "name": "raid_bdev1", 00:43:06.987 "uuid": "dc10549f-0400-48f1-89e8-6c9fb426e74f", 00:43:06.987 "strip_size_kb": 0, 00:43:06.987 
"state": "online", 00:43:06.987 "raid_level": "raid1", 00:43:06.987 "superblock": true, 00:43:06.987 "num_base_bdevs": 2, 00:43:06.987 "num_base_bdevs_discovered": 2, 00:43:06.987 "num_base_bdevs_operational": 2, 00:43:06.987 "process": { 00:43:06.987 "type": "rebuild", 00:43:06.987 "target": "spare", 00:43:06.987 "progress": { 00:43:06.987 "blocks": 2560, 00:43:06.987 "percent": 32 00:43:06.987 } 00:43:06.987 }, 00:43:06.987 "base_bdevs_list": [ 00:43:06.987 { 00:43:06.987 "name": "spare", 00:43:06.987 "uuid": "8f8e7eb3-60a3-5020-85f2-7e69e186b686", 00:43:06.987 "is_configured": true, 00:43:06.987 "data_offset": 256, 00:43:06.987 "data_size": 7936 00:43:06.987 }, 00:43:06.987 { 00:43:06.987 "name": "BaseBdev2", 00:43:06.987 "uuid": "b8b55a7d-80e3-5910-9034-f36c1e52e05c", 00:43:06.987 "is_configured": true, 00:43:06.987 "data_offset": 256, 00:43:06.987 "data_size": 7936 00:43:06.987 } 00:43:06.987 ] 00:43:06.987 }' 00:43:06.987 17:40:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:43:06.987 17:40:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:43:06.987 17:40:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:43:06.987 17:40:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:43:06.987 17:40:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:43:06.987 17:40:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:06.987 17:40:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:43:06.987 [2024-11-26 17:40:07.645742] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:43:07.246 [2024-11-26 17:40:07.697521] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:43:07.246 [2024-11-26 17:40:07.697631] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:07.246 [2024-11-26 17:40:07.697651] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:43:07.246 [2024-11-26 17:40:07.697660] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:43:07.246 17:40:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.246 17:40:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:43:07.246 17:40:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:07.246 17:40:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:07.246 17:40:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:07.246 17:40:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:07.246 17:40:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:43:07.246 17:40:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:07.246 17:40:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:07.246 17:40:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:07.246 17:40:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:07.246 17:40:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:07.246 17:40:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.246 17:40:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:43:07.246 17:40:07 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:07.246 17:40:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.246 17:40:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:07.246 "name": "raid_bdev1", 00:43:07.246 "uuid": "dc10549f-0400-48f1-89e8-6c9fb426e74f", 00:43:07.246 "strip_size_kb": 0, 00:43:07.246 "state": "online", 00:43:07.246 "raid_level": "raid1", 00:43:07.246 "superblock": true, 00:43:07.246 "num_base_bdevs": 2, 00:43:07.246 "num_base_bdevs_discovered": 1, 00:43:07.246 "num_base_bdevs_operational": 1, 00:43:07.246 "base_bdevs_list": [ 00:43:07.246 { 00:43:07.246 "name": null, 00:43:07.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:07.246 "is_configured": false, 00:43:07.246 "data_offset": 0, 00:43:07.246 "data_size": 7936 00:43:07.246 }, 00:43:07.246 { 00:43:07.246 "name": "BaseBdev2", 00:43:07.246 "uuid": "b8b55a7d-80e3-5910-9034-f36c1e52e05c", 00:43:07.246 "is_configured": true, 00:43:07.246 "data_offset": 256, 00:43:07.246 "data_size": 7936 00:43:07.246 } 00:43:07.246 ] 00:43:07.246 }' 00:43:07.246 17:40:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:07.246 17:40:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:43:07.816 17:40:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:43:07.816 17:40:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:43:07.816 17:40:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:43:07.816 17:40:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:43:07.816 17:40:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:43:07.816 17:40:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 
-- # rpc_cmd bdev_raid_get_bdevs all 00:43:07.816 17:40:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:07.816 17:40:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.816 17:40:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:43:07.816 17:40:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.816 17:40:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:43:07.816 "name": "raid_bdev1", 00:43:07.816 "uuid": "dc10549f-0400-48f1-89e8-6c9fb426e74f", 00:43:07.816 "strip_size_kb": 0, 00:43:07.816 "state": "online", 00:43:07.816 "raid_level": "raid1", 00:43:07.816 "superblock": true, 00:43:07.816 "num_base_bdevs": 2, 00:43:07.816 "num_base_bdevs_discovered": 1, 00:43:07.816 "num_base_bdevs_operational": 1, 00:43:07.816 "base_bdevs_list": [ 00:43:07.816 { 00:43:07.816 "name": null, 00:43:07.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:07.816 "is_configured": false, 00:43:07.816 "data_offset": 0, 00:43:07.816 "data_size": 7936 00:43:07.816 }, 00:43:07.817 { 00:43:07.817 "name": "BaseBdev2", 00:43:07.817 "uuid": "b8b55a7d-80e3-5910-9034-f36c1e52e05c", 00:43:07.817 "is_configured": true, 00:43:07.817 "data_offset": 256, 00:43:07.817 "data_size": 7936 00:43:07.817 } 00:43:07.817 ] 00:43:07.817 }' 00:43:07.817 17:40:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:43:07.817 17:40:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:43:07.817 17:40:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:43:07.817 17:40:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:43:07.817 17:40:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd 
bdev_passthru_delete BaseBdev1 00:43:07.817 17:40:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.817 17:40:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:43:07.817 17:40:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.817 17:40:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:43:07.817 17:40:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.817 17:40:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:43:07.817 [2024-11-26 17:40:08.372659] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:43:07.817 [2024-11-26 17:40:08.372756] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:07.817 [2024-11-26 17:40:08.372795] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:43:07.817 [2024-11-26 17:40:08.372820] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:07.817 [2024-11-26 17:40:08.373390] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:07.817 [2024-11-26 17:40:08.373422] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:43:07.817 [2024-11-26 17:40:08.373542] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:43:07.817 [2024-11-26 17:40:08.373564] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:43:07.817 [2024-11-26 17:40:08.373580] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:43:07.817 [2024-11-26 17:40:08.373594] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed 
to examine bdev BaseBdev1: Invalid argument 00:43:07.817 BaseBdev1 00:43:07.817 17:40:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.817 17:40:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:43:08.754 17:40:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:43:08.754 17:40:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:08.754 17:40:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:08.754 17:40:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:08.754 17:40:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:08.754 17:40:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:43:08.754 17:40:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:08.754 17:40:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:08.754 17:40:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:08.754 17:40:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:08.754 17:40:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:08.754 17:40:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:08.754 17:40:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:43:08.754 17:40:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:08.754 17:40:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:08.754 17:40:09 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:08.754 "name": "raid_bdev1", 00:43:08.754 "uuid": "dc10549f-0400-48f1-89e8-6c9fb426e74f", 00:43:08.754 "strip_size_kb": 0, 00:43:08.754 "state": "online", 00:43:08.754 "raid_level": "raid1", 00:43:08.754 "superblock": true, 00:43:08.754 "num_base_bdevs": 2, 00:43:08.754 "num_base_bdevs_discovered": 1, 00:43:08.754 "num_base_bdevs_operational": 1, 00:43:08.754 "base_bdevs_list": [ 00:43:08.754 { 00:43:08.754 "name": null, 00:43:08.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:08.754 "is_configured": false, 00:43:08.754 "data_offset": 0, 00:43:08.754 "data_size": 7936 00:43:08.754 }, 00:43:08.754 { 00:43:08.754 "name": "BaseBdev2", 00:43:08.754 "uuid": "b8b55a7d-80e3-5910-9034-f36c1e52e05c", 00:43:08.754 "is_configured": true, 00:43:08.754 "data_offset": 256, 00:43:08.754 "data_size": 7936 00:43:08.754 } 00:43:08.754 ] 00:43:08.754 }' 00:43:08.754 17:40:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:08.754 17:40:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:43:09.323 17:40:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:43:09.323 17:40:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:43:09.323 17:40:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:43:09.323 17:40:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:43:09.323 17:40:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:43:09.323 17:40:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:09.323 17:40:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:09.323 17:40:09 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@10 -- # set +x 00:43:09.323 17:40:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:09.324 17:40:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:09.324 17:40:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:43:09.324 "name": "raid_bdev1", 00:43:09.324 "uuid": "dc10549f-0400-48f1-89e8-6c9fb426e74f", 00:43:09.324 "strip_size_kb": 0, 00:43:09.324 "state": "online", 00:43:09.324 "raid_level": "raid1", 00:43:09.324 "superblock": true, 00:43:09.324 "num_base_bdevs": 2, 00:43:09.324 "num_base_bdevs_discovered": 1, 00:43:09.324 "num_base_bdevs_operational": 1, 00:43:09.324 "base_bdevs_list": [ 00:43:09.324 { 00:43:09.324 "name": null, 00:43:09.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:09.324 "is_configured": false, 00:43:09.324 "data_offset": 0, 00:43:09.324 "data_size": 7936 00:43:09.324 }, 00:43:09.324 { 00:43:09.324 "name": "BaseBdev2", 00:43:09.324 "uuid": "b8b55a7d-80e3-5910-9034-f36c1e52e05c", 00:43:09.324 "is_configured": true, 00:43:09.324 "data_offset": 256, 00:43:09.324 "data_size": 7936 00:43:09.324 } 00:43:09.324 ] 00:43:09.324 }' 00:43:09.324 17:40:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:43:09.324 17:40:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:43:09.324 17:40:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:43:09.324 17:40:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:43:09.324 17:40:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:43:09.324 17:40:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # local es=0 00:43:09.324 17:40:09 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:43:09.324 17:40:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:43:09.324 17:40:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:09.324 17:40:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:43:09.324 17:40:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:09.324 17:40:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:43:09.324 17:40:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:09.324 17:40:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:43:09.324 [2024-11-26 17:40:09.978035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:43:09.324 [2024-11-26 17:40:09.978300] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:43:09.324 [2024-11-26 17:40:09.978331] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:43:09.324 request: 00:43:09.324 { 00:43:09.324 "base_bdev": "BaseBdev1", 00:43:09.324 "raid_bdev": "raid_bdev1", 00:43:09.324 "method": "bdev_raid_add_base_bdev", 00:43:09.324 "req_id": 1 00:43:09.324 } 00:43:09.324 Got JSON-RPC error response 00:43:09.324 response: 00:43:09.324 { 00:43:09.324 "code": -22, 00:43:09.324 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:43:09.324 } 00:43:09.324 17:40:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:43:09.324 17:40:09 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@655 -- # es=1 00:43:09.324 17:40:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:43:09.324 17:40:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:43:09.324 17:40:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:43:09.324 17:40:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:43:10.701 17:40:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:43:10.701 17:40:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:10.701 17:40:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:10.701 17:40:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:10.701 17:40:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:10.701 17:40:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:43:10.701 17:40:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:10.701 17:40:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:10.701 17:40:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:10.701 17:40:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:10.701 17:40:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:10.701 17:40:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:10.701 17:40:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:10.701 17:40:11 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:43:10.701 17:40:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:10.701 17:40:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:10.701 "name": "raid_bdev1", 00:43:10.701 "uuid": "dc10549f-0400-48f1-89e8-6c9fb426e74f", 00:43:10.701 "strip_size_kb": 0, 00:43:10.701 "state": "online", 00:43:10.701 "raid_level": "raid1", 00:43:10.701 "superblock": true, 00:43:10.701 "num_base_bdevs": 2, 00:43:10.701 "num_base_bdevs_discovered": 1, 00:43:10.701 "num_base_bdevs_operational": 1, 00:43:10.701 "base_bdevs_list": [ 00:43:10.701 { 00:43:10.701 "name": null, 00:43:10.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:10.701 "is_configured": false, 00:43:10.701 "data_offset": 0, 00:43:10.701 "data_size": 7936 00:43:10.701 }, 00:43:10.701 { 00:43:10.701 "name": "BaseBdev2", 00:43:10.701 "uuid": "b8b55a7d-80e3-5910-9034-f36c1e52e05c", 00:43:10.701 "is_configured": true, 00:43:10.701 "data_offset": 256, 00:43:10.701 "data_size": 7936 00:43:10.701 } 00:43:10.701 ] 00:43:10.701 }' 00:43:10.701 17:40:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:10.701 17:40:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:43:10.959 17:40:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:43:10.959 17:40:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:43:10.959 17:40:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:43:10.959 17:40:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:43:10.959 17:40:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:43:10.959 17:40:11 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:10.959 17:40:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:10.959 17:40:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:10.959 17:40:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:43:10.959 17:40:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:10.959 17:40:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:43:10.959 "name": "raid_bdev1", 00:43:10.959 "uuid": "dc10549f-0400-48f1-89e8-6c9fb426e74f", 00:43:10.959 "strip_size_kb": 0, 00:43:10.959 "state": "online", 00:43:10.959 "raid_level": "raid1", 00:43:10.959 "superblock": true, 00:43:10.959 "num_base_bdevs": 2, 00:43:10.959 "num_base_bdevs_discovered": 1, 00:43:10.959 "num_base_bdevs_operational": 1, 00:43:10.959 "base_bdevs_list": [ 00:43:10.959 { 00:43:10.959 "name": null, 00:43:10.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:10.959 "is_configured": false, 00:43:10.959 "data_offset": 0, 00:43:10.959 "data_size": 7936 00:43:10.959 }, 00:43:10.959 { 00:43:10.959 "name": "BaseBdev2", 00:43:10.959 "uuid": "b8b55a7d-80e3-5910-9034-f36c1e52e05c", 00:43:10.959 "is_configured": true, 00:43:10.959 "data_offset": 256, 00:43:10.959 "data_size": 7936 00:43:10.959 } 00:43:10.959 ] 00:43:10.959 }' 00:43:10.959 17:40:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:43:10.959 17:40:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:43:10.959 17:40:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:43:10.959 17:40:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:43:10.959 17:40:11 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@784 -- # killprocess 86831 00:43:10.959 17:40:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86831 ']' 00:43:10.959 17:40:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86831 00:43:10.959 17:40:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:43:10.960 17:40:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:10.960 17:40:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86831 00:43:11.218 killing process with pid 86831 00:43:11.218 Received shutdown signal, test time was about 60.000000 seconds 00:43:11.218 00:43:11.218 Latency(us) 00:43:11.218 [2024-11-26T17:40:11.913Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:11.218 [2024-11-26T17:40:11.913Z] =================================================================================================================== 00:43:11.218 [2024-11-26T17:40:11.913Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:43:11.218 17:40:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:11.218 17:40:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:11.218 17:40:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86831' 00:43:11.218 17:40:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86831 00:43:11.218 [2024-11-26 17:40:11.659343] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:43:11.218 17:40:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86831 00:43:11.218 [2024-11-26 17:40:11.659539] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:43:11.218 [2024-11-26 17:40:11.659612] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:43:11.218 [2024-11-26 17:40:11.659627] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:43:11.476 [2024-11-26 17:40:12.061424] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:43:13.379 17:40:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:43:13.379 00:43:13.379 real 0m21.282s 00:43:13.379 user 0m27.376s 00:43:13.379 sys 0m3.200s 00:43:13.379 17:40:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:13.379 17:40:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:43:13.379 ************************************ 00:43:13.379 END TEST raid_rebuild_test_sb_4k 00:43:13.379 ************************************ 00:43:13.379 17:40:13 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:43:13.379 17:40:13 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:43:13.379 17:40:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:43:13.379 17:40:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:13.379 17:40:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:43:13.379 ************************************ 00:43:13.379 START TEST raid_state_function_test_sb_md_separate 00:43:13.379 ************************************ 00:43:13.379 17:40:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:43:13.379 17:40:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:43:13.379 17:40:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:43:13.379 17:40:13 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:43:13.379 17:40:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:43:13.379 17:40:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:43:13.379 17:40:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:43:13.379 17:40:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:43:13.379 17:40:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:43:13.379 17:40:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:43:13.379 17:40:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:43:13.379 17:40:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:43:13.379 17:40:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:43:13.379 17:40:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:43:13.379 17:40:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:43:13.379 17:40:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:43:13.379 17:40:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:43:13.379 17:40:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:43:13.379 17:40:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:43:13.379 17:40:13 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:43:13.379 17:40:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:43:13.379 17:40:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:43:13.379 17:40:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:43:13.379 17:40:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87533 00:43:13.379 17:40:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:43:13.379 17:40:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87533' 00:43:13.379 Process raid pid: 87533 00:43:13.379 17:40:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87533 00:43:13.379 17:40:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87533 ']' 00:43:13.379 17:40:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:13.379 17:40:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:13.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:13.379 17:40:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:43:13.379 17:40:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:13.379 17:40:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:13.379 [2024-11-26 17:40:13.741615] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:43:13.379 [2024-11-26 17:40:13.741774] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:13.379 [2024-11-26 17:40:13.931006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:13.637 [2024-11-26 17:40:14.095257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:13.896 [2024-11-26 17:40:14.384302] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:43:13.896 [2024-11-26 17:40:14.384349] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:43:14.156 17:40:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:14.156 17:40:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:43:14.156 17:40:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:43:14.156 17:40:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:14.156 17:40:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:14.156 [2024-11-26 17:40:14.610588] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:43:14.156 [2024-11-26 17:40:14.610655] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:43:14.156 [2024-11-26 17:40:14.610669] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:43:14.156 [2024-11-26 17:40:14.610682] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:43:14.156 17:40:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:14.156 17:40:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:43:14.156 17:40:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:43:14.156 17:40:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:43:14.156 17:40:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:14.156 17:40:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:14.156 17:40:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:43:14.156 17:40:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:14.156 17:40:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:14.156 17:40:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:14.156 17:40:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:14.156 17:40:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:43:14.156 17:40:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all
00:43:14.156 17:40:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:14.156 17:40:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:43:14.156 17:40:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:14.156 17:40:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:43:14.156 "name": "Existed_Raid",
00:43:14.156 "uuid": "359ef164-a693-468e-be8d-d3c735082600",
00:43:14.156 "strip_size_kb": 0,
00:43:14.156 "state": "configuring",
00:43:14.156 "raid_level": "raid1",
00:43:14.156 "superblock": true,
00:43:14.156 "num_base_bdevs": 2,
00:43:14.156 "num_base_bdevs_discovered": 0,
00:43:14.156 "num_base_bdevs_operational": 2,
00:43:14.156 "base_bdevs_list": [
00:43:14.156 {
00:43:14.156 "name": "BaseBdev1",
00:43:14.156 "uuid": "00000000-0000-0000-0000-000000000000",
00:43:14.156 "is_configured": false,
00:43:14.156 "data_offset": 0,
00:43:14.156 "data_size": 0
00:43:14.156 },
00:43:14.156 {
00:43:14.156 "name": "BaseBdev2",
00:43:14.156 "uuid": "00000000-0000-0000-0000-000000000000",
00:43:14.156 "is_configured": false,
00:43:14.156 "data_offset": 0,
00:43:14.156 "data_size": 0
00:43:14.156 }
00:43:14.156 ]
00:43:14.156 }'
00:43:14.156 17:40:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:43:14.156 17:40:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:43:14.416 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:43:14.416 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:14.416 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:43:14.416 [2024-11-26 17:40:15.101745] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:43:14.416 [2024-11-26 17:40:15.101796] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:43:14.416 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:14.416 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:43:14.416 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:14.416 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:43:14.676 [2024-11-26 17:40:15.113730] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:43:14.676 [2024-11-26 17:40:15.113781] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:43:14.676 [2024-11-26 17:40:15.113793] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:43:14.676 [2024-11-26 17:40:15.113809] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:43:14.676 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:14.676 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1
00:43:14.676 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:14.676 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:43:14.676 [2024-11-26 17:40:15.178214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:43:14.676 BaseBdev1
00:43:14.676 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:14.676 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:43:14.676 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:43:14.676 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:43:14.676 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i
00:43:14.676 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:43:14.676 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:43:14.676 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:43:14.676 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:14.676 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:43:14.676 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:14.676 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:43:14.676 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:14.676 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:43:14.676 [
00:43:14.676 {
00:43:14.676 "name": "BaseBdev1",
00:43:14.676 "aliases": [
00:43:14.676 "7af56346-2427-4989-9370-64ef2b8995f0"
00:43:14.676 ],
00:43:14.676 "product_name": "Malloc disk",
00:43:14.676 "block_size": 4096,
00:43:14.676 "num_blocks": 8192,
00:43:14.676 "uuid": "7af56346-2427-4989-9370-64ef2b8995f0",
00:43:14.676 "md_size": 32,
00:43:14.676 "md_interleave": false,
00:43:14.676 "dif_type": 0,
00:43:14.676 "assigned_rate_limits": {
00:43:14.676 "rw_ios_per_sec": 0,
00:43:14.676 "rw_mbytes_per_sec": 0,
00:43:14.676 "r_mbytes_per_sec": 0,
00:43:14.676 "w_mbytes_per_sec": 0
00:43:14.676 },
00:43:14.676 "claimed": true,
00:43:14.676 "claim_type": "exclusive_write",
00:43:14.676 "zoned": false,
00:43:14.676 "supported_io_types": {
00:43:14.676 "read": true,
00:43:14.676 "write": true,
00:43:14.676 "unmap": true,
00:43:14.676 "flush": true,
00:43:14.676 "reset": true,
00:43:14.676 "nvme_admin": false,
00:43:14.676 "nvme_io": false,
00:43:14.676 "nvme_io_md": false,
00:43:14.676 "write_zeroes": true,
00:43:14.676 "zcopy": true,
00:43:14.676 "get_zone_info": false,
00:43:14.676 "zone_management": false,
00:43:14.676 "zone_append": false,
00:43:14.676 "compare": false,
00:43:14.676 "compare_and_write": false,
00:43:14.676 "abort": true,
00:43:14.676 "seek_hole": false,
00:43:14.676 "seek_data": false,
00:43:14.676 "copy": true,
00:43:14.676 "nvme_iov_md": false
00:43:14.676 },
00:43:14.676 "memory_domains": [
00:43:14.676 {
00:43:14.676 "dma_device_id": "system",
00:43:14.676 "dma_device_type": 1
00:43:14.676 },
00:43:14.676 {
00:43:14.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:43:14.676 "dma_device_type": 2
00:43:14.676 }
00:43:14.676 ],
00:43:14.676 "driver_specific": {}
00:43:14.676 }
00:43:14.676 ]
00:43:14.676 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:14.676 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0
00:43:14.676 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:43:14.676 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:43:14.676 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:43:14.676 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:43:14.676 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:43:14.676 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:43:14.676 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:43:14.676 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:43:14.676 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:43:14.676 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:43:14.676 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:43:14.676 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:43:14.676 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:14.676 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:43:14.676 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:14.676 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:43:14.676 "name": "Existed_Raid",
00:43:14.676 "uuid": "4b0e4014-1a91-426d-a343-21c9cbb641ae",
00:43:14.676 "strip_size_kb": 0,
00:43:14.676 "state": "configuring",
00:43:14.676 "raid_level": "raid1",
00:43:14.676 "superblock": true,
00:43:14.676 "num_base_bdevs": 2,
00:43:14.676 "num_base_bdevs_discovered": 1,
00:43:14.676 "num_base_bdevs_operational": 2,
00:43:14.676 "base_bdevs_list": [
00:43:14.676 {
00:43:14.676 "name": "BaseBdev1",
00:43:14.676 "uuid": "7af56346-2427-4989-9370-64ef2b8995f0",
00:43:14.676 "is_configured": true,
00:43:14.676 "data_offset": 256,
00:43:14.676 "data_size": 7936
00:43:14.676 },
00:43:14.676 {
00:43:14.676 "name": "BaseBdev2",
00:43:14.676 "uuid": "00000000-0000-0000-0000-000000000000",
00:43:14.676 "is_configured": false,
00:43:14.676 "data_offset": 0,
00:43:14.676 "data_size": 0
00:43:14.676 }
00:43:14.676 ]
00:43:14.676 }'
00:43:14.676 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:43:14.676 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:43:15.244 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:43:15.244 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:15.244 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:43:15.244 [2024-11-26 17:40:15.697591] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:43:15.244 [2024-11-26 17:40:15.697733] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:43:15.244 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:15.244 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:43:15.244 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:15.244 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:43:15.244 [2024-11-26 17:40:15.709588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:43:15.244 [2024-11-26 17:40:15.712196] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:43:15.244 [2024-11-26 17:40:15.712245] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:43:15.244 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:15.244 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:43:15.244 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:43:15.244 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:43:15.244 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:43:15.244 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:43:15.244 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:43:15.244 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:43:15.244 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:43:15.244 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:43:15.244 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:43:15.244 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:43:15.244 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:43:15.244 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:43:15.244 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:43:15.244 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:15.244 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:43:15.244 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:15.245 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:43:15.245 "name": "Existed_Raid",
00:43:15.245 "uuid": "1c09693b-e20b-408e-9d5b-6fa6b6de058f",
00:43:15.245 "strip_size_kb": 0,
00:43:15.245 "state": "configuring",
00:43:15.245 "raid_level": "raid1",
00:43:15.245 "superblock": true,
00:43:15.245 "num_base_bdevs": 2,
00:43:15.245 "num_base_bdevs_discovered": 1,
00:43:15.245 "num_base_bdevs_operational": 2,
00:43:15.245 "base_bdevs_list": [
00:43:15.245 {
00:43:15.245 "name": "BaseBdev1",
00:43:15.245 "uuid": "7af56346-2427-4989-9370-64ef2b8995f0",
00:43:15.245 "is_configured": true,
00:43:15.245 "data_offset": 256,
00:43:15.245 "data_size": 7936
00:43:15.245 },
00:43:15.245 {
00:43:15.245 "name": "BaseBdev2",
00:43:15.245 "uuid": "00000000-0000-0000-0000-000000000000",
00:43:15.245 "is_configured": false,
00:43:15.245 "data_offset": 0,
00:43:15.245 "data_size": 0
00:43:15.245 }
00:43:15.245 ]
00:43:15.245 }'
00:43:15.245 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:43:15.245 17:40:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:43:15.505 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2
00:43:15.505 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:15.505 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:43:15.764 [2024-11-26 17:40:16.270217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:43:15.764 [2024-11-26 17:40:16.270634] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:43:15.764 [2024-11-26 17:40:16.270663] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:43:15.764 [2024-11-26 17:40:16.270768] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:43:15.764 [2024-11-26 17:40:16.270947] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:43:15.764 [2024-11-26 17:40:16.270963] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:43:15.764 [2024-11-26 17:40:16.271076] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:43:15.764 BaseBdev2
00:43:15.764 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:15.764 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:43:15.764 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:43:15.764 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:43:15.764 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i
00:43:15.764 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:43:15.764 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:43:15.764 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:43:15.764 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:15.764 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:43:15.764 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:15.764 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:43:15.764 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:15.764 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:43:15.764 [
00:43:15.764 {
00:43:15.764 "name": "BaseBdev2",
00:43:15.764 "aliases": [
00:43:15.764 "4bcdb755-1665-4a1b-b1ad-f398c705e247"
00:43:15.764 ],
00:43:15.764 "product_name": "Malloc disk",
00:43:15.764 "block_size": 4096,
00:43:15.764 "num_blocks": 8192,
00:43:15.764 "uuid": "4bcdb755-1665-4a1b-b1ad-f398c705e247",
00:43:15.764 "md_size": 32,
00:43:15.764 "md_interleave": false,
00:43:15.764 "dif_type": 0,
00:43:15.764 "assigned_rate_limits": {
00:43:15.764 "rw_ios_per_sec": 0,
00:43:15.764 "rw_mbytes_per_sec": 0,
00:43:15.764 "r_mbytes_per_sec": 0,
00:43:15.764 "w_mbytes_per_sec": 0
00:43:15.764 },
00:43:15.764 "claimed": true,
00:43:15.764 "claim_type": "exclusive_write",
00:43:15.764 "zoned": false,
00:43:15.764 "supported_io_types": {
00:43:15.764 "read": true,
00:43:15.764 "write": true,
00:43:15.764 "unmap": true,
00:43:15.764 "flush": true,
00:43:15.764 "reset": true,
00:43:15.764 "nvme_admin": false,
00:43:15.764 "nvme_io": false,
00:43:15.764 "nvme_io_md": false,
00:43:15.764 "write_zeroes": true,
00:43:15.764 "zcopy": true,
00:43:15.764 "get_zone_info": false,
00:43:15.764 "zone_management": false,
00:43:15.764 "zone_append": false,
00:43:15.764 "compare": false,
00:43:15.764 "compare_and_write": false,
00:43:15.764 "abort": true,
00:43:15.764 "seek_hole": false,
00:43:15.764 "seek_data": false,
00:43:15.764 "copy": true,
00:43:15.764 "nvme_iov_md": false
00:43:15.764 },
00:43:15.764 "memory_domains": [
00:43:15.764 {
00:43:15.764 "dma_device_id": "system",
00:43:15.764 "dma_device_type": 1
00:43:15.764 },
00:43:15.764 {
00:43:15.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:43:15.764 "dma_device_type": 2
00:43:15.764 }
00:43:15.764 ],
00:43:15.764 "driver_specific": {}
00:43:15.764 }
00:43:15.764 ]
00:43:15.764 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:15.764 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0
00:43:15.764 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:43:15.764 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:43:15.764 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2
00:43:15.764 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:43:15.764 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:43:15.764 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:43:15.764 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:43:15.764 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:43:15.764 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:43:15.764 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:43:15.764 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:43:15.764 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:43:15.764 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:43:15.764 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:43:15.764 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:15.764 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:43:15.764 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:15.764 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:43:15.764 "name": "Existed_Raid",
00:43:15.764 "uuid": "1c09693b-e20b-408e-9d5b-6fa6b6de058f",
00:43:15.764 "strip_size_kb": 0,
00:43:15.764 "state": "online",
00:43:15.764 "raid_level": "raid1",
00:43:15.764 "superblock": true,
00:43:15.764 "num_base_bdevs": 2,
00:43:15.764 "num_base_bdevs_discovered": 2,
00:43:15.764 "num_base_bdevs_operational": 2,
00:43:15.764 "base_bdevs_list": [
00:43:15.764 {
00:43:15.764 "name": "BaseBdev1",
00:43:15.764 "uuid": "7af56346-2427-4989-9370-64ef2b8995f0",
00:43:15.764 "is_configured": true,
00:43:15.764 "data_offset": 256,
00:43:15.764 "data_size": 7936
00:43:15.764 },
00:43:15.764 {
00:43:15.764 "name": "BaseBdev2",
00:43:15.764 "uuid": "4bcdb755-1665-4a1b-b1ad-f398c705e247",
00:43:15.764 "is_configured": true,
00:43:15.764 "data_offset": 256,
00:43:15.764 "data_size": 7936
00:43:15.764 }
00:43:15.764 ]
00:43:15.764 }'
00:43:15.764 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:43:15.764 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:43:16.331 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:43:16.331 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:43:16.331 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:43:16.331 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:43:16.331 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name
00:43:16.331 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:43:16.331 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:43:16.331 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:16.331 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:43:16.331 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:43:16.331 [2024-11-26 17:40:16.813922] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:43:16.331 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:16.331 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:43:16.331 "name": "Existed_Raid",
00:43:16.331 "aliases": [
00:43:16.331 "1c09693b-e20b-408e-9d5b-6fa6b6de058f"
00:43:16.331 ],
00:43:16.331 "product_name": "Raid Volume",
00:43:16.331 "block_size": 4096,
00:43:16.331 "num_blocks": 7936,
00:43:16.331 "uuid": "1c09693b-e20b-408e-9d5b-6fa6b6de058f",
00:43:16.331 "md_size": 32,
00:43:16.331 "md_interleave": false,
00:43:16.331 "dif_type": 0,
00:43:16.331 "assigned_rate_limits": {
00:43:16.331 "rw_ios_per_sec": 0,
00:43:16.331 "rw_mbytes_per_sec": 0,
00:43:16.331 "r_mbytes_per_sec": 0,
00:43:16.331 "w_mbytes_per_sec": 0
00:43:16.331 },
00:43:16.331 "claimed": false,
00:43:16.331 "zoned": false,
00:43:16.331 "supported_io_types": {
00:43:16.331 "read": true,
00:43:16.331 "write": true,
00:43:16.331 "unmap": false,
00:43:16.331 "flush": false,
00:43:16.331 "reset": true,
00:43:16.331 "nvme_admin": false,
00:43:16.331 "nvme_io": false,
00:43:16.331 "nvme_io_md": false,
00:43:16.331 "write_zeroes": true,
00:43:16.331 "zcopy": false,
00:43:16.331 "get_zone_info": false,
00:43:16.331 "zone_management": false,
00:43:16.331 "zone_append": false,
00:43:16.331 "compare": false,
00:43:16.331 "compare_and_write": false,
00:43:16.331 "abort": false,
00:43:16.331 "seek_hole": false,
00:43:16.331 "seek_data": false,
00:43:16.331 "copy": false,
00:43:16.331 "nvme_iov_md": false
00:43:16.331 },
00:43:16.331 "memory_domains": [
00:43:16.331 {
00:43:16.331 "dma_device_id": "system",
00:43:16.331 "dma_device_type": 1
00:43:16.331 },
00:43:16.331 {
00:43:16.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:43:16.331 "dma_device_type": 2
00:43:16.331 },
00:43:16.331 {
00:43:16.331 "dma_device_id": "system",
00:43:16.332 "dma_device_type": 1
00:43:16.332 },
00:43:16.332 {
00:43:16.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:43:16.332 "dma_device_type": 2
00:43:16.332 }
00:43:16.332 ],
00:43:16.332 "driver_specific": {
00:43:16.332 "raid": {
00:43:16.332 "uuid": "1c09693b-e20b-408e-9d5b-6fa6b6de058f",
00:43:16.332 "strip_size_kb": 0,
00:43:16.332 "state": "online",
00:43:16.332 "raid_level": "raid1",
00:43:16.332 "superblock": true,
00:43:16.332 "num_base_bdevs": 2,
00:43:16.332 "num_base_bdevs_discovered": 2,
00:43:16.332 "num_base_bdevs_operational": 2,
00:43:16.332 "base_bdevs_list": [
00:43:16.332 {
00:43:16.332 "name": "BaseBdev1",
00:43:16.332 "uuid": "7af56346-2427-4989-9370-64ef2b8995f0",
00:43:16.332 "is_configured": true,
00:43:16.332 "data_offset": 256,
00:43:16.332 "data_size": 7936
00:43:16.332 },
00:43:16.332 {
00:43:16.332 "name": "BaseBdev2",
00:43:16.332 "uuid": "4bcdb755-1665-4a1b-b1ad-f398c705e247",
00:43:16.332 "is_configured": true,
00:43:16.332 "data_offset": 256,
00:43:16.332 "data_size": 7936
00:43:16.332 }
00:43:16.332 ]
00:43:16.332 }
00:43:16.332 }
00:43:16.332 }'
00:43:16.332 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:43:16.332 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:43:16.332 BaseBdev2'
00:43:16.332 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:43:16.332 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0'
00:43:16.332 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:43:16.332 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:43:16.332 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:16.332 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:43:16.332 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:43:16.332 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:16.332 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0'
00:43:16.332 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]]
00:43:16.332 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:43:16.332 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:43:16.332 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:43:16.332 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:16.332 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:43:16.332 17:40:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:16.332 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0'
00:43:16.332 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]]
00:43:16.332 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:43:16.332 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:16.332 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:43:16.332 [2024-11-26 17:40:17.021207] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:43:16.590 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:16.590 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state
00:43:16.590 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1
00:43:16.590 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in
00:43:16.590 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0
00:43:16.590 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:43:16.590 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1
00:43:16.590 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:43:16.591 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:43:16.591 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:43:16.591 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:43:16.591 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:43:16.591 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:43:16.591 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:43:16.591 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:43:16.591 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:43:16.591 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:43:16.591 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:43:16.591 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:16.591 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:43:16.591 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:16.591 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:43:16.591 "name": "Existed_Raid",
00:43:16.591 "uuid": "1c09693b-e20b-408e-9d5b-6fa6b6de058f",
00:43:16.591 "strip_size_kb": 0,
00:43:16.591 "state": "online",
00:43:16.591 "raid_level": "raid1",
00:43:16.591 "superblock": true,
00:43:16.591 "num_base_bdevs": 2,
00:43:16.591 "num_base_bdevs_discovered": 1,
00:43:16.591 "num_base_bdevs_operational": 1,
00:43:16.591 "base_bdevs_list": [
00:43:16.591 {
00:43:16.591 "name": null,
00:43:16.591 "uuid": "00000000-0000-0000-0000-000000000000",
00:43:16.591 "is_configured": false,
00:43:16.591 "data_offset": 0,
00:43:16.591 "data_size": 7936
00:43:16.591 },
00:43:16.591 {
00:43:16.591 "name": "BaseBdev2",
00:43:16.591 "uuid": "4bcdb755-1665-4a1b-b1ad-f398c705e247",
00:43:16.591 "is_configured": true,
00:43:16.591 "data_offset": 256,
00:43:16.591 "data_size": 7936
00:43:16.591 }
00:43:16.591 ]
00:43:16.591 }'
00:43:16.591 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:43:16.591 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:43:17.157 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:43:17.157 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:43:17.158 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:43:17.158 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:43:17.158 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:17.158 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:43:17.158 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:17.158 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:43:17.158 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:43:17.158 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:43:17.158 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:17.158 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:43:17.158 [2024-11-26 17:40:17.701006] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:43:17.158 [2024-11-26 17:40:17.701226] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:43:17.158 [2024-11-26 17:40:17.843998] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:43:17.158 [2024-11-26 17:40:17.844157] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:43:17.158 [2024-11-26 17:40:17.844214] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:43:17.158 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:17.158 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:43:17.158 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:43:17.416 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:43:17.416 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:43:17.416 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:17.416 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:43:17.416 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:17.416 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:43:17.416 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:43:17.416 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']'
00:43:17.416 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87533
00:43:17.416 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87533 ']'
00:43:17.416 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87533
00:43:17.416 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname
00:43:17.416 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:43:17.416 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87533
00:43:17.416 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:43:17.416 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:43:17.416 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87533'
00:43:17.416 killing process with pid 87533
00:43:17.416 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87533
00:43:17.416 17:40:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87533
00:43:17.416 [2024-11-26 17:40:17.944026] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:43:17.416 [2024-11-26 17:40:17.967276] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:43:18.793 17:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0
00:43:18.793
00:43:18.793 real 0m5.858s
00:43:18.793 user 0m8.066s
00:43:18.793 sys 0m1.104s
00:43:18.793 17:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable
00:43:18.793
17:40:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:18.793 ************************************ 00:43:18.793 END TEST raid_state_function_test_sb_md_separate 00:43:18.793 ************************************ 00:43:19.052 17:40:19 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:43:19.053 17:40:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:43:19.053 17:40:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:19.053 17:40:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:43:19.053 ************************************ 00:43:19.053 START TEST raid_superblock_test_md_separate 00:43:19.053 ************************************ 00:43:19.053 17:40:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:43:19.053 17:40:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:43:19.053 17:40:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:43:19.053 17:40:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:43:19.053 17:40:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:43:19.053 17:40:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:43:19.053 17:40:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:43:19.053 17:40:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:43:19.053 17:40:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:43:19.053 17:40:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:43:19.053 17:40:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:43:19.053 17:40:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:43:19.053 17:40:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:43:19.053 17:40:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:43:19.053 17:40:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:43:19.053 17:40:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:43:19.053 17:40:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87791 00:43:19.053 17:40:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:43:19.053 17:40:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87791 00:43:19.053 17:40:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87791 ']' 00:43:19.053 17:40:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:19.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:19.053 17:40:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:19.053 17:40:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:43:19.053 17:40:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:19.053 17:40:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:19.053 [2024-11-26 17:40:19.653779] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:43:19.053 [2024-11-26 17:40:19.654441] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87791 ] 00:43:19.313 [2024-11-26 17:40:19.831859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:19.313 [2024-11-26 17:40:19.993429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:19.878 [2024-11-26 17:40:20.278612] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:43:19.878 [2024-11-26 17:40:20.278671] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:43:19.878 17:40:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:19.878 17:40:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:43:19.878 17:40:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:43:19.878 17:40:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:43:19.878 17:40:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:43:19.878 17:40:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:43:19.878 17:40:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:43:19.878 17:40:20 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:43:19.878 17:40:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:43:19.878 17:40:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:43:19.878 17:40:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:43:19.878 17:40:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:19.878 17:40:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:20.136 malloc1 00:43:20.136 17:40:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:20.136 17:40:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:43:20.136 17:40:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:20.136 17:40:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:20.136 [2024-11-26 17:40:20.598245] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:43:20.136 [2024-11-26 17:40:20.598442] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:20.136 [2024-11-26 17:40:20.598496] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:43:20.136 [2024-11-26 17:40:20.598583] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:20.136 [2024-11-26 17:40:20.601367] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:20.136 [2024-11-26 17:40:20.601453] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:43:20.136 pt1 00:43:20.136 17:40:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:20.136 17:40:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:43:20.136 17:40:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:43:20.136 17:40:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:43:20.136 17:40:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:43:20.136 17:40:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:43:20.136 17:40:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:43:20.136 17:40:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:43:20.136 17:40:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:43:20.136 17:40:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:43:20.136 17:40:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:20.136 17:40:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:20.136 malloc2 00:43:20.136 17:40:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:20.136 17:40:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:43:20.136 17:40:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:20.136 17:40:20 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:20.136 [2024-11-26 17:40:20.674107] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:43:20.136 [2024-11-26 17:40:20.674295] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:20.136 [2024-11-26 17:40:20.674350] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:43:20.136 [2024-11-26 17:40:20.674401] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:20.136 [2024-11-26 17:40:20.677023] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:20.136 [2024-11-26 17:40:20.677104] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:43:20.136 pt2 00:43:20.136 17:40:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:20.136 17:40:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:43:20.136 17:40:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:43:20.136 17:40:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:43:20.136 17:40:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:20.136 17:40:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:20.136 [2024-11-26 17:40:20.686086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:43:20.136 [2024-11-26 17:40:20.688473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:43:20.136 [2024-11-26 17:40:20.688709] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:43:20.136 [2024-11-26 17:40:20.688736] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:43:20.136 [2024-11-26 17:40:20.688826] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:43:20.136 [2024-11-26 17:40:20.688971] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:43:20.136 [2024-11-26 17:40:20.688985] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:43:20.136 [2024-11-26 17:40:20.689111] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:20.136 17:40:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:20.136 17:40:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:43:20.136 17:40:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:20.136 17:40:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:20.136 17:40:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:20.136 17:40:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:20.136 17:40:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:43:20.136 17:40:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:20.136 17:40:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:20.136 17:40:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:20.136 17:40:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:20.136 17:40:20 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:20.136 17:40:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:20.136 17:40:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:20.136 17:40:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:20.136 17:40:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:20.136 17:40:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:20.136 "name": "raid_bdev1", 00:43:20.136 "uuid": "2cd0e0f6-1d14-4c2f-b281-184ce10bf390", 00:43:20.136 "strip_size_kb": 0, 00:43:20.136 "state": "online", 00:43:20.136 "raid_level": "raid1", 00:43:20.136 "superblock": true, 00:43:20.136 "num_base_bdevs": 2, 00:43:20.136 "num_base_bdevs_discovered": 2, 00:43:20.136 "num_base_bdevs_operational": 2, 00:43:20.136 "base_bdevs_list": [ 00:43:20.136 { 00:43:20.136 "name": "pt1", 00:43:20.136 "uuid": "00000000-0000-0000-0000-000000000001", 00:43:20.136 "is_configured": true, 00:43:20.136 "data_offset": 256, 00:43:20.136 "data_size": 7936 00:43:20.136 }, 00:43:20.136 { 00:43:20.136 "name": "pt2", 00:43:20.136 "uuid": "00000000-0000-0000-0000-000000000002", 00:43:20.136 "is_configured": true, 00:43:20.136 "data_offset": 256, 00:43:20.136 "data_size": 7936 00:43:20.136 } 00:43:20.136 ] 00:43:20.136 }' 00:43:20.136 17:40:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:20.136 17:40:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:20.700 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:43:20.700 17:40:21 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:43:20.700 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:43:20.700 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:43:20.700 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:43:20.700 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:43:20.700 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:43:20.700 17:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:20.700 17:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:20.700 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:43:20.700 [2024-11-26 17:40:21.126092] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:43:20.700 17:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:20.700 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:43:20.700 "name": "raid_bdev1", 00:43:20.700 "aliases": [ 00:43:20.700 "2cd0e0f6-1d14-4c2f-b281-184ce10bf390" 00:43:20.700 ], 00:43:20.700 "product_name": "Raid Volume", 00:43:20.700 "block_size": 4096, 00:43:20.700 "num_blocks": 7936, 00:43:20.700 "uuid": "2cd0e0f6-1d14-4c2f-b281-184ce10bf390", 00:43:20.700 "md_size": 32, 00:43:20.700 "md_interleave": false, 00:43:20.700 "dif_type": 0, 00:43:20.700 "assigned_rate_limits": { 00:43:20.700 "rw_ios_per_sec": 0, 00:43:20.700 "rw_mbytes_per_sec": 0, 00:43:20.700 "r_mbytes_per_sec": 0, 00:43:20.700 "w_mbytes_per_sec": 0 00:43:20.700 }, 00:43:20.700 "claimed": false, 00:43:20.700 "zoned": false, 
00:43:20.700 "supported_io_types": { 00:43:20.700 "read": true, 00:43:20.700 "write": true, 00:43:20.700 "unmap": false, 00:43:20.700 "flush": false, 00:43:20.700 "reset": true, 00:43:20.700 "nvme_admin": false, 00:43:20.700 "nvme_io": false, 00:43:20.700 "nvme_io_md": false, 00:43:20.700 "write_zeroes": true, 00:43:20.700 "zcopy": false, 00:43:20.700 "get_zone_info": false, 00:43:20.700 "zone_management": false, 00:43:20.700 "zone_append": false, 00:43:20.700 "compare": false, 00:43:20.700 "compare_and_write": false, 00:43:20.700 "abort": false, 00:43:20.700 "seek_hole": false, 00:43:20.700 "seek_data": false, 00:43:20.700 "copy": false, 00:43:20.700 "nvme_iov_md": false 00:43:20.700 }, 00:43:20.700 "memory_domains": [ 00:43:20.700 { 00:43:20.700 "dma_device_id": "system", 00:43:20.700 "dma_device_type": 1 00:43:20.700 }, 00:43:20.700 { 00:43:20.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:43:20.700 "dma_device_type": 2 00:43:20.700 }, 00:43:20.700 { 00:43:20.700 "dma_device_id": "system", 00:43:20.700 "dma_device_type": 1 00:43:20.700 }, 00:43:20.700 { 00:43:20.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:43:20.700 "dma_device_type": 2 00:43:20.700 } 00:43:20.700 ], 00:43:20.700 "driver_specific": { 00:43:20.700 "raid": { 00:43:20.700 "uuid": "2cd0e0f6-1d14-4c2f-b281-184ce10bf390", 00:43:20.700 "strip_size_kb": 0, 00:43:20.700 "state": "online", 00:43:20.700 "raid_level": "raid1", 00:43:20.700 "superblock": true, 00:43:20.700 "num_base_bdevs": 2, 00:43:20.700 "num_base_bdevs_discovered": 2, 00:43:20.700 "num_base_bdevs_operational": 2, 00:43:20.700 "base_bdevs_list": [ 00:43:20.700 { 00:43:20.700 "name": "pt1", 00:43:20.700 "uuid": "00000000-0000-0000-0000-000000000001", 00:43:20.700 "is_configured": true, 00:43:20.700 "data_offset": 256, 00:43:20.700 "data_size": 7936 00:43:20.700 }, 00:43:20.700 { 00:43:20.700 "name": "pt2", 00:43:20.700 "uuid": "00000000-0000-0000-0000-000000000002", 00:43:20.700 "is_configured": true, 00:43:20.700 "data_offset": 256, 
00:43:20.700 "data_size": 7936 00:43:20.700 } 00:43:20.700 ] 00:43:20.700 } 00:43:20.700 } 00:43:20.700 }' 00:43:20.700 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:43:20.700 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:43:20.700 pt2' 00:43:20.700 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:43:20.700 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:43:20.700 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:43:20.700 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:43:20.700 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:43:20.700 17:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:20.700 17:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:20.700 17:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:20.700 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:43:20.700 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:43:20.700 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:43:20.700 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:43:20.700 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:43:20.700 17:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:20.700 17:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:20.700 17:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:20.700 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:43:20.700 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:43:20.700 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:43:20.700 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:43:20.700 17:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:20.700 17:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:20.700 [2024-11-26 17:40:21.361685] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:43:20.700 17:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:20.700 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2cd0e0f6-1d14-4c2f-b281-184ce10bf390 00:43:20.700 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 2cd0e0f6-1d14-4c2f-b281-184ce10bf390 ']' 00:43:20.700 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:43:20.700 17:40:21 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:20.700 17:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:20.958 [2024-11-26 17:40:21.397262] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:43:20.958 [2024-11-26 17:40:21.397365] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:43:20.958 [2024-11-26 17:40:21.397525] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:43:20.958 [2024-11-26 17:40:21.397639] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:43:20.958 [2024-11-26 17:40:21.397695] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:43:20.958 17:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:20.958 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:20.958 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:43:20.958 17:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:20.958 17:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:20.958 17:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:20.958 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:43:20.958 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:43:20.958 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:43:20.958 17:40:21 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:43:20.958 17:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:20.958 17:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:20.958 17:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:20.958 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:43:20.958 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:43:20.958 17:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:20.958 17:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:20.958 17:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:20.958 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:43:20.958 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:43:20.958 17:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:20.958 17:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:20.959 17:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:20.959 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:43:20.959 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:43:20.959 17:40:21 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@652 -- # local es=0 00:43:20.959 17:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:43:20.959 17:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:43:20.959 17:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:20.959 17:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:43:20.959 17:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:20.959 17:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:43:20.959 17:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:20.959 17:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:20.959 [2024-11-26 17:40:21.521061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:43:20.959 [2024-11-26 17:40:21.523660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:43:20.959 [2024-11-26 17:40:21.523803] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:43:20.959 [2024-11-26 17:40:21.523918] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:43:20.959 [2024-11-26 17:40:21.523979] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:43:20.959 [2024-11-26 17:40:21.524010] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:43:20.959 request: 00:43:20.959 { 00:43:20.959 "name": "raid_bdev1", 00:43:20.959 "raid_level": "raid1", 00:43:20.959 "base_bdevs": [ 00:43:20.959 "malloc1", 00:43:20.959 "malloc2" 00:43:20.959 ], 00:43:20.959 "superblock": false, 00:43:20.959 "method": "bdev_raid_create", 00:43:20.959 "req_id": 1 00:43:20.959 } 00:43:20.959 Got JSON-RPC error response 00:43:20.959 response: 00:43:20.959 { 00:43:20.959 "code": -17, 00:43:20.959 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:43:20.959 } 00:43:20.959 17:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:43:20.959 17:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:43:20.959 17:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:43:20.959 17:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:43:20.959 17:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:43:20.959 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:20.959 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:43:20.959 17:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:20.959 17:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:20.959 17:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:20.959 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:43:20.959 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:43:20.959 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd 
bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:43:20.959 17:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:20.959 17:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:20.959 [2024-11-26 17:40:21.584940] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:43:20.959 [2024-11-26 17:40:21.585007] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:20.959 [2024-11-26 17:40:21.585028] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:43:20.959 [2024-11-26 17:40:21.585042] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:20.959 [2024-11-26 17:40:21.587729] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:20.959 [2024-11-26 17:40:21.587772] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:43:20.959 [2024-11-26 17:40:21.587834] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:43:20.959 [2024-11-26 17:40:21.587903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:43:20.959 pt1 00:43:20.959 17:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:20.959 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:43:20.959 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:20.959 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:43:20.959 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:20.959 17:40:21 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:20.959 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:43:20.959 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:20.959 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:20.959 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:20.959 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:20.959 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:20.959 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:20.959 17:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:20.959 17:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:20.959 17:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:20.959 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:20.959 "name": "raid_bdev1", 00:43:20.959 "uuid": "2cd0e0f6-1d14-4c2f-b281-184ce10bf390", 00:43:20.959 "strip_size_kb": 0, 00:43:20.959 "state": "configuring", 00:43:20.959 "raid_level": "raid1", 00:43:20.959 "superblock": true, 00:43:20.959 "num_base_bdevs": 2, 00:43:20.959 "num_base_bdevs_discovered": 1, 00:43:20.959 "num_base_bdevs_operational": 2, 00:43:20.959 "base_bdevs_list": [ 00:43:20.959 { 00:43:20.959 "name": "pt1", 00:43:20.959 "uuid": "00000000-0000-0000-0000-000000000001", 00:43:20.959 "is_configured": true, 00:43:20.959 "data_offset": 256, 00:43:20.959 "data_size": 7936 00:43:20.959 }, 00:43:20.959 { 
00:43:20.959 "name": null, 00:43:20.959 "uuid": "00000000-0000-0000-0000-000000000002", 00:43:20.959 "is_configured": false, 00:43:20.959 "data_offset": 256, 00:43:20.959 "data_size": 7936 00:43:20.959 } 00:43:20.959 ] 00:43:20.959 }' 00:43:20.959 17:40:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:20.959 17:40:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:21.526 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:43:21.526 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:43:21.526 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:43:21.526 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:43:21.526 17:40:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.526 17:40:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:21.526 [2024-11-26 17:40:22.040699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:43:21.526 [2024-11-26 17:40:22.040907] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:21.526 [2024-11-26 17:40:22.040961] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:43:21.526 [2024-11-26 17:40:22.041005] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:21.526 [2024-11-26 17:40:22.041345] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:21.526 [2024-11-26 17:40:22.041406] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:43:21.526 [2024-11-26 17:40:22.041504] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:43:21.526 [2024-11-26 17:40:22.041583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:43:21.526 [2024-11-26 17:40:22.041786] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:43:21.526 [2024-11-26 17:40:22.041833] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:43:21.526 [2024-11-26 17:40:22.041956] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:43:21.526 [2024-11-26 17:40:22.042140] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:43:21.526 [2024-11-26 17:40:22.042183] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:43:21.526 [2024-11-26 17:40:22.042374] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:21.526 pt2 00:43:21.526 17:40:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.526 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:43:21.526 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:43:21.526 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:43:21.526 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:21.526 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:21.526 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:21.526 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:21.526 17:40:22 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:43:21.526 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:21.526 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:21.526 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:21.526 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:21.526 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:21.526 17:40:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.526 17:40:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:21.526 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:21.526 17:40:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.526 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:21.526 "name": "raid_bdev1", 00:43:21.526 "uuid": "2cd0e0f6-1d14-4c2f-b281-184ce10bf390", 00:43:21.526 "strip_size_kb": 0, 00:43:21.526 "state": "online", 00:43:21.526 "raid_level": "raid1", 00:43:21.526 "superblock": true, 00:43:21.526 "num_base_bdevs": 2, 00:43:21.526 "num_base_bdevs_discovered": 2, 00:43:21.526 "num_base_bdevs_operational": 2, 00:43:21.526 "base_bdevs_list": [ 00:43:21.526 { 00:43:21.526 "name": "pt1", 00:43:21.526 "uuid": "00000000-0000-0000-0000-000000000001", 00:43:21.526 "is_configured": true, 00:43:21.526 "data_offset": 256, 00:43:21.526 "data_size": 7936 00:43:21.526 }, 00:43:21.526 { 00:43:21.526 "name": "pt2", 00:43:21.526 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:43:21.526 "is_configured": true, 00:43:21.526 "data_offset": 256, 00:43:21.526 "data_size": 7936 00:43:21.526 } 00:43:21.526 ] 00:43:21.526 }' 00:43:21.526 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:21.526 17:40:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:21.783 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:43:21.783 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:43:21.783 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:43:21.783 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:43:21.784 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:43:21.784 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:43:21.784 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:43:21.784 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:43:21.784 17:40:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.784 17:40:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:21.784 [2024-11-26 17:40:22.440766] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:43:21.784 17:40:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.784 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:43:21.784 "name": "raid_bdev1", 00:43:21.784 
"aliases": [ 00:43:21.784 "2cd0e0f6-1d14-4c2f-b281-184ce10bf390" 00:43:21.784 ], 00:43:21.784 "product_name": "Raid Volume", 00:43:21.784 "block_size": 4096, 00:43:21.784 "num_blocks": 7936, 00:43:21.784 "uuid": "2cd0e0f6-1d14-4c2f-b281-184ce10bf390", 00:43:21.784 "md_size": 32, 00:43:21.784 "md_interleave": false, 00:43:21.784 "dif_type": 0, 00:43:21.784 "assigned_rate_limits": { 00:43:21.784 "rw_ios_per_sec": 0, 00:43:21.784 "rw_mbytes_per_sec": 0, 00:43:21.784 "r_mbytes_per_sec": 0, 00:43:21.784 "w_mbytes_per_sec": 0 00:43:21.784 }, 00:43:21.784 "claimed": false, 00:43:21.784 "zoned": false, 00:43:21.784 "supported_io_types": { 00:43:21.784 "read": true, 00:43:21.784 "write": true, 00:43:21.784 "unmap": false, 00:43:21.784 "flush": false, 00:43:21.784 "reset": true, 00:43:21.784 "nvme_admin": false, 00:43:21.784 "nvme_io": false, 00:43:21.784 "nvme_io_md": false, 00:43:21.784 "write_zeroes": true, 00:43:21.784 "zcopy": false, 00:43:21.784 "get_zone_info": false, 00:43:21.784 "zone_management": false, 00:43:21.784 "zone_append": false, 00:43:21.784 "compare": false, 00:43:21.784 "compare_and_write": false, 00:43:21.784 "abort": false, 00:43:21.784 "seek_hole": false, 00:43:21.784 "seek_data": false, 00:43:21.784 "copy": false, 00:43:21.784 "nvme_iov_md": false 00:43:21.784 }, 00:43:21.784 "memory_domains": [ 00:43:21.784 { 00:43:21.784 "dma_device_id": "system", 00:43:21.784 "dma_device_type": 1 00:43:21.784 }, 00:43:21.784 { 00:43:21.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:43:21.784 "dma_device_type": 2 00:43:21.784 }, 00:43:21.784 { 00:43:21.784 "dma_device_id": "system", 00:43:21.784 "dma_device_type": 1 00:43:21.784 }, 00:43:21.784 { 00:43:21.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:43:21.784 "dma_device_type": 2 00:43:21.784 } 00:43:21.784 ], 00:43:21.784 "driver_specific": { 00:43:21.784 "raid": { 00:43:21.784 "uuid": "2cd0e0f6-1d14-4c2f-b281-184ce10bf390", 00:43:21.784 "strip_size_kb": 0, 00:43:21.784 "state": "online", 00:43:21.784 
"raid_level": "raid1", 00:43:21.784 "superblock": true, 00:43:21.784 "num_base_bdevs": 2, 00:43:21.784 "num_base_bdevs_discovered": 2, 00:43:21.784 "num_base_bdevs_operational": 2, 00:43:21.784 "base_bdevs_list": [ 00:43:21.784 { 00:43:21.784 "name": "pt1", 00:43:21.784 "uuid": "00000000-0000-0000-0000-000000000001", 00:43:21.784 "is_configured": true, 00:43:21.784 "data_offset": 256, 00:43:21.784 "data_size": 7936 00:43:21.784 }, 00:43:21.784 { 00:43:21.784 "name": "pt2", 00:43:21.784 "uuid": "00000000-0000-0000-0000-000000000002", 00:43:21.784 "is_configured": true, 00:43:21.784 "data_offset": 256, 00:43:21.784 "data_size": 7936 00:43:21.784 } 00:43:21.784 ] 00:43:21.784 } 00:43:21.784 } 00:43:21.784 }' 00:43:21.784 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:43:22.042 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:43:22.042 pt2' 00:43:22.042 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:43:22.042 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:43:22.042 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:43:22.042 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:43:22.042 17:40:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:22.042 17:40:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:22.042 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:43:22.042 17:40:22 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:22.042 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:43:22.042 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:43:22.042 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:43:22.042 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:43:22.042 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:43:22.042 17:40:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:22.042 17:40:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:22.042 17:40:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:22.042 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:43:22.042 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:43:22.042 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:43:22.042 17:40:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:22.042 17:40:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:22.042 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:43:22.042 [2024-11-26 17:40:22.672294] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:43:22.042 17:40:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:22.042 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 2cd0e0f6-1d14-4c2f-b281-184ce10bf390 '!=' 2cd0e0f6-1d14-4c2f-b281-184ce10bf390 ']' 00:43:22.042 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:43:22.042 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:43:22.042 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:43:22.042 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:43:22.042 17:40:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:22.042 17:40:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:22.042 [2024-11-26 17:40:22.719941] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:43:22.042 17:40:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:22.042 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:43:22.042 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:22.042 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:22.042 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:22.042 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:22.042 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:43:22.042 
17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:22.042 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:22.042 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:22.042 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:22.042 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:22.042 17:40:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:22.042 17:40:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:22.042 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:22.301 17:40:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:22.301 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:22.301 "name": "raid_bdev1", 00:43:22.301 "uuid": "2cd0e0f6-1d14-4c2f-b281-184ce10bf390", 00:43:22.301 "strip_size_kb": 0, 00:43:22.301 "state": "online", 00:43:22.301 "raid_level": "raid1", 00:43:22.301 "superblock": true, 00:43:22.301 "num_base_bdevs": 2, 00:43:22.301 "num_base_bdevs_discovered": 1, 00:43:22.301 "num_base_bdevs_operational": 1, 00:43:22.301 "base_bdevs_list": [ 00:43:22.301 { 00:43:22.301 "name": null, 00:43:22.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:22.301 "is_configured": false, 00:43:22.301 "data_offset": 0, 00:43:22.301 "data_size": 7936 00:43:22.301 }, 00:43:22.301 { 00:43:22.301 "name": "pt2", 00:43:22.302 "uuid": "00000000-0000-0000-0000-000000000002", 00:43:22.302 "is_configured": true, 00:43:22.302 "data_offset": 256, 00:43:22.302 "data_size": 7936 00:43:22.302 } 
00:43:22.302 ] 00:43:22.302 }' 00:43:22.302 17:40:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:22.302 17:40:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:22.570 17:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:43:22.570 17:40:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:22.570 17:40:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:22.570 [2024-11-26 17:40:23.203192] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:43:22.570 [2024-11-26 17:40:23.203330] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:43:22.570 [2024-11-26 17:40:23.203468] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:43:22.570 [2024-11-26 17:40:23.203589] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:43:22.570 [2024-11-26 17:40:23.203649] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:43:22.570 17:40:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:22.570 17:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:43:22.570 17:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:22.570 17:40:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:22.570 17:40:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:22.570 17:40:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:22.570 17:40:23 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:43:22.570 17:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:43:22.570 17:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:43:22.570 17:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:43:22.570 17:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:43:22.570 17:40:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:22.570 17:40:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:22.829 17:40:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:22.829 17:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:43:22.829 17:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:43:22.829 17:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:43:22.829 17:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:43:22.829 17:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:43:22.829 17:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:43:22.829 17:40:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:22.829 17:40:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:22.829 [2024-11-26 17:40:23.275043] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:43:22.829 [2024-11-26 
17:40:23.275118] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:22.829 [2024-11-26 17:40:23.275139] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:43:22.829 [2024-11-26 17:40:23.275155] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:22.829 [2024-11-26 17:40:23.277878] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:22.829 [2024-11-26 17:40:23.277925] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:43:22.829 [2024-11-26 17:40:23.277991] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:43:22.829 [2024-11-26 17:40:23.278052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:43:22.829 [2024-11-26 17:40:23.278181] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:43:22.829 [2024-11-26 17:40:23.278197] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:43:22.829 [2024-11-26 17:40:23.278297] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:43:22.829 [2024-11-26 17:40:23.278453] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:43:22.829 [2024-11-26 17:40:23.278463] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:43:22.829 [2024-11-26 17:40:23.278619] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:22.829 pt2 00:43:22.829 17:40:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:22.829 17:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:43:22.829 17:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:43:22.829 17:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:22.829 17:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:22.829 17:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:22.829 17:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:43:22.829 17:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:22.829 17:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:22.829 17:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:22.829 17:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:22.829 17:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:22.829 17:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:22.829 17:40:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:22.829 17:40:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:22.829 17:40:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:22.829 17:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:22.829 "name": "raid_bdev1", 00:43:22.829 "uuid": "2cd0e0f6-1d14-4c2f-b281-184ce10bf390", 00:43:22.829 "strip_size_kb": 0, 00:43:22.829 "state": "online", 00:43:22.829 "raid_level": "raid1", 00:43:22.829 "superblock": true, 00:43:22.829 "num_base_bdevs": 2, 00:43:22.829 
"num_base_bdevs_discovered": 1, 00:43:22.829 "num_base_bdevs_operational": 1, 00:43:22.829 "base_bdevs_list": [ 00:43:22.829 { 00:43:22.829 "name": null, 00:43:22.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:22.829 "is_configured": false, 00:43:22.829 "data_offset": 256, 00:43:22.829 "data_size": 7936 00:43:22.829 }, 00:43:22.829 { 00:43:22.829 "name": "pt2", 00:43:22.829 "uuid": "00000000-0000-0000-0000-000000000002", 00:43:22.829 "is_configured": true, 00:43:22.829 "data_offset": 256, 00:43:22.829 "data_size": 7936 00:43:22.829 } 00:43:22.829 ] 00:43:22.829 }' 00:43:22.829 17:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:22.829 17:40:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:23.088 17:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:43:23.088 17:40:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:23.088 17:40:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:23.088 [2024-11-26 17:40:23.702382] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:43:23.088 [2024-11-26 17:40:23.702524] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:43:23.088 [2024-11-26 17:40:23.702662] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:43:23.088 [2024-11-26 17:40:23.702751] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:43:23.088 [2024-11-26 17:40:23.702863] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:43:23.088 17:40:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:23.088 17:40:23 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:23.088 17:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:43:23.088 17:40:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:23.088 17:40:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:23.088 17:40:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:23.088 17:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:43:23.088 17:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:43:23.088 17:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:43:23.088 17:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:43:23.088 17:40:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:23.088 17:40:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:23.088 [2024-11-26 17:40:23.766291] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:43:23.088 [2024-11-26 17:40:23.766415] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:23.088 [2024-11-26 17:40:23.766461] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:43:23.088 [2024-11-26 17:40:23.766522] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:23.088 [2024-11-26 17:40:23.769216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:23.088 [2024-11-26 17:40:23.769309] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt1 00:43:23.088 [2024-11-26 17:40:23.769414] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:43:23.088 [2024-11-26 17:40:23.769498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:43:23.088 [2024-11-26 17:40:23.769737] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:43:23.088 [2024-11-26 17:40:23.769806] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:43:23.088 [2024-11-26 17:40:23.769870] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:43:23.088 [2024-11-26 17:40:23.770026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:43:23.088 [2024-11-26 17:40:23.770155] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:43:23.088 [2024-11-26 17:40:23.770197] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:43:23.088 [2024-11-26 17:40:23.770296] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:43:23.088 pt1 00:43:23.088 [2024-11-26 17:40:23.770461] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:43:23.088 [2024-11-26 17:40:23.770478] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:43:23.088 [2024-11-26 17:40:23.770658] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:23.088 17:40:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:23.088 17:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:43:23.088 17:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 
00:43:23.088 17:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:23.088 17:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:23.088 17:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:23.088 17:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:23.088 17:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:43:23.088 17:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:23.088 17:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:23.088 17:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:23.089 17:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:23.089 17:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:23.089 17:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:23.089 17:40:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:23.089 17:40:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:23.347 17:40:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:23.347 17:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:23.347 "name": "raid_bdev1", 00:43:23.347 "uuid": "2cd0e0f6-1d14-4c2f-b281-184ce10bf390", 00:43:23.347 "strip_size_kb": 0, 00:43:23.347 "state": "online", 00:43:23.347 "raid_level": "raid1", 
00:43:23.347 "superblock": true, 00:43:23.347 "num_base_bdevs": 2, 00:43:23.347 "num_base_bdevs_discovered": 1, 00:43:23.347 "num_base_bdevs_operational": 1, 00:43:23.347 "base_bdevs_list": [ 00:43:23.347 { 00:43:23.347 "name": null, 00:43:23.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:23.347 "is_configured": false, 00:43:23.347 "data_offset": 256, 00:43:23.347 "data_size": 7936 00:43:23.347 }, 00:43:23.347 { 00:43:23.347 "name": "pt2", 00:43:23.347 "uuid": "00000000-0000-0000-0000-000000000002", 00:43:23.347 "is_configured": true, 00:43:23.347 "data_offset": 256, 00:43:23.347 "data_size": 7936 00:43:23.347 } 00:43:23.347 ] 00:43:23.347 }' 00:43:23.347 17:40:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:23.347 17:40:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:23.606 17:40:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:43:23.606 17:40:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:23.606 17:40:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:23.606 17:40:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:43:23.606 17:40:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:23.606 17:40:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:43:23.606 17:40:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:43:23.606 17:40:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:43:23.606 17:40:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:23.606 
17:40:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:23.606 [2024-11-26 17:40:24.285734] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:43:23.865 17:40:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:23.865 17:40:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 2cd0e0f6-1d14-4c2f-b281-184ce10bf390 '!=' 2cd0e0f6-1d14-4c2f-b281-184ce10bf390 ']' 00:43:23.865 17:40:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87791 00:43:23.865 17:40:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87791 ']' 00:43:23.865 17:40:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87791 00:43:23.865 17:40:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:43:23.865 17:40:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:23.865 17:40:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87791 00:43:23.865 killing process with pid 87791 00:43:23.865 17:40:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:23.865 17:40:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:23.865 17:40:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87791' 00:43:23.865 17:40:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87791 00:43:23.865 [2024-11-26 17:40:24.367480] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:43:23.865 [2024-11-26 17:40:24.367619] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:43:23.865 17:40:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87791 00:43:23.866 [2024-11-26 17:40:24.367680] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:43:23.866 [2024-11-26 17:40:24.367702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:43:24.125 [2024-11-26 17:40:24.607801] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:43:25.504 17:40:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:43:25.504 00:43:25.504 real 0m6.266s 00:43:25.504 user 0m9.283s 00:43:25.504 sys 0m1.246s 00:43:25.504 17:40:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:25.504 17:40:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:25.504 ************************************ 00:43:25.504 END TEST raid_superblock_test_md_separate 00:43:25.504 ************************************ 00:43:25.504 17:40:25 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:43:25.504 17:40:25 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:43:25.504 17:40:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:43:25.504 17:40:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:25.504 17:40:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:43:25.504 ************************************ 00:43:25.504 START TEST raid_rebuild_test_sb_md_separate 00:43:25.504 ************************************ 00:43:25.504 17:40:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:43:25.504 17:40:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- 
# local raid_level=raid1 00:43:25.504 17:40:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:43:25.504 17:40:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:43:25.504 17:40:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:43:25.504 17:40:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:43:25.504 17:40:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:43:25.504 17:40:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:43:25.504 17:40:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:43:25.504 17:40:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:43:25.504 17:40:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:43:25.504 17:40:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:43:25.504 17:40:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:43:25.504 17:40:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:43:25.504 17:40:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:43:25.504 17:40:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:43:25.504 17:40:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:43:25.504 17:40:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:43:25.504 17:40:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:43:25.504 17:40:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:43:25.504 17:40:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:43:25.504 17:40:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:43:25.504 17:40:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:43:25.504 17:40:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:43:25.504 17:40:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:43:25.504 17:40:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88118 00:43:25.504 17:40:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:43:25.504 17:40:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88118 00:43:25.504 17:40:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88118 ']' 00:43:25.504 17:40:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:25.504 17:40:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:25.504 17:40:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:25.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:43:25.504 17:40:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:25.504 17:40:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:25.504 [2024-11-26 17:40:26.010777] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:43:25.504 [2024-11-26 17:40:26.011020] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88118 ] 00:43:25.504 I/O size of 3145728 is greater than zero copy threshold (65536). 00:43:25.504 Zero copy mechanism will not be used. 00:43:25.504 [2024-11-26 17:40:26.194064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:25.764 [2024-11-26 17:40:26.335045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:26.023 [2024-11-26 17:40:26.569734] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:43:26.023 [2024-11-26 17:40:26.569822] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:43:26.283 17:40:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:26.283 17:40:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:43:26.283 17:40:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:43:26.283 17:40:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:43:26.283 17:40:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:26.283 17:40:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:26.283 BaseBdev1_malloc 
00:43:26.283 17:40:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:26.283 17:40:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:43:26.283 17:40:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:26.283 17:40:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:26.283 [2024-11-26 17:40:26.904266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:43:26.283 [2024-11-26 17:40:26.904353] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:26.283 [2024-11-26 17:40:26.904379] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:43:26.283 [2024-11-26 17:40:26.904393] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:26.283 [2024-11-26 17:40:26.906617] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:26.283 [2024-11-26 17:40:26.906655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:43:26.283 BaseBdev1 00:43:26.283 17:40:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:26.283 17:40:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:43:26.283 17:40:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:43:26.283 17:40:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:26.283 17:40:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:26.283 BaseBdev2_malloc 00:43:26.283 17:40:26 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:26.283 17:40:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:43:26.283 17:40:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:26.283 17:40:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:26.283 [2024-11-26 17:40:26.968388] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:43:26.283 [2024-11-26 17:40:26.968592] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:26.283 [2024-11-26 17:40:26.968622] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:43:26.283 [2024-11-26 17:40:26.968636] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:26.283 [2024-11-26 17:40:26.970869] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:26.283 [2024-11-26 17:40:26.970908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:43:26.283 BaseBdev2 00:43:26.283 17:40:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:26.283 17:40:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:43:26.283 17:40:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:26.283 17:40:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:26.542 spare_malloc 00:43:26.542 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:26.542 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:43:26.542 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:26.542 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:26.542 spare_delay 00:43:26.542 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:26.542 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:43:26.542 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:26.542 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:26.542 [2024-11-26 17:40:27.054898] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:43:26.542 [2024-11-26 17:40:27.054970] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:26.542 [2024-11-26 17:40:27.054991] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:43:26.542 [2024-11-26 17:40:27.055004] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:26.542 [2024-11-26 17:40:27.057284] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:26.542 [2024-11-26 17:40:27.057372] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:43:26.542 spare 00:43:26.542 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:26.542 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:43:26.542 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:26.542 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:43:26.542 [2024-11-26 17:40:27.066922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:43:26.542 [2024-11-26 17:40:27.069004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:43:26.542 [2024-11-26 17:40:27.069195] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:43:26.542 [2024-11-26 17:40:27.069211] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:43:26.542 [2024-11-26 17:40:27.069293] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:43:26.543 [2024-11-26 17:40:27.069431] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:43:26.543 [2024-11-26 17:40:27.069441] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:43:26.543 [2024-11-26 17:40:27.069558] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:26.543 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:26.543 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:43:26.543 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:26.543 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:26.543 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:26.543 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:26.543 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:43:26.543 17:40:27 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:26.543 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:26.543 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:26.543 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:26.543 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:26.543 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:26.543 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:26.543 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:26.543 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:26.543 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:26.543 "name": "raid_bdev1", 00:43:26.543 "uuid": "45b5027f-c696-455d-9413-b0b59e1823fa", 00:43:26.543 "strip_size_kb": 0, 00:43:26.543 "state": "online", 00:43:26.543 "raid_level": "raid1", 00:43:26.543 "superblock": true, 00:43:26.543 "num_base_bdevs": 2, 00:43:26.543 "num_base_bdevs_discovered": 2, 00:43:26.543 "num_base_bdevs_operational": 2, 00:43:26.543 "base_bdevs_list": [ 00:43:26.543 { 00:43:26.543 "name": "BaseBdev1", 00:43:26.543 "uuid": "46b36616-6d51-585e-83a0-b60f20aefdfd", 00:43:26.543 "is_configured": true, 00:43:26.543 "data_offset": 256, 00:43:26.543 "data_size": 7936 00:43:26.543 }, 00:43:26.543 { 00:43:26.543 "name": "BaseBdev2", 00:43:26.543 "uuid": "86eadb8d-1e88-52f5-a3a8-58503ef3f13e", 00:43:26.543 "is_configured": true, 00:43:26.543 "data_offset": 256, 00:43:26.543 "data_size": 7936 
00:43:26.543 } 00:43:26.543 ] 00:43:26.543 }' 00:43:26.543 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:26.543 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:27.111 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:43:27.111 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:27.111 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:27.111 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:43:27.111 [2024-11-26 17:40:27.554457] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:43:27.111 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:27.111 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:43:27.112 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:27.112 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:27.112 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:27.112 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:43:27.112 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:27.112 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:43:27.112 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:43:27.112 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:43:27.112 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:43:27.112 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:43:27.112 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:43:27.112 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:43:27.112 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:43:27.112 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:43:27.112 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:43:27.112 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:43:27.112 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:43:27.112 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:43:27.112 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:43:27.371 [2024-11-26 17:40:27.837753] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:43:27.371 /dev/nbd0 00:43:27.371 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:43:27.371 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:43:27.371 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:43:27.371 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@873 -- # local i 00:43:27.371 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:43:27.371 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:43:27.371 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:43:27.371 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:43:27.371 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:43:27.371 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:43:27.371 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:27.371 1+0 records in 00:43:27.371 1+0 records out 00:43:27.371 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00045205 s, 9.1 MB/s 00:43:27.371 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:27.371 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:43:27.371 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:27.371 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:43:27.371 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:43:27.372 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:43:27.372 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:43:27.372 17:40:27 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:43:27.372 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:43:27.372 17:40:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:43:27.940 7936+0 records in 00:43:27.940 7936+0 records out 00:43:27.940 32505856 bytes (33 MB, 31 MiB) copied, 0.70069 s, 46.4 MB/s 00:43:27.940 17:40:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:43:27.940 17:40:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:43:27.940 17:40:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:43:27.940 17:40:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:43:27.940 17:40:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:43:27.940 17:40:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:27.940 17:40:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:43:28.198 17:40:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:43:28.198 [2024-11-26 17:40:28.834879] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:28.198 17:40:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:43:28.198 17:40:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:43:28.198 17:40:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:28.198 17:40:28 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:28.198 17:40:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:43:28.198 17:40:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:43:28.198 17:40:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:43:28.198 17:40:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:43:28.198 17:40:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:28.198 17:40:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:28.198 [2024-11-26 17:40:28.858966] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:43:28.198 17:40:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:28.198 17:40:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:43:28.198 17:40:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:28.198 17:40:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:28.198 17:40:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:28.198 17:40:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:28.198 17:40:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:43:28.198 17:40:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:28.198 17:40:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:43:28.198 17:40:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:28.198 17:40:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:28.198 17:40:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:28.198 17:40:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:28.198 17:40:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:28.198 17:40:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:28.198 17:40:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:28.457 17:40:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:28.457 "name": "raid_bdev1", 00:43:28.457 "uuid": "45b5027f-c696-455d-9413-b0b59e1823fa", 00:43:28.457 "strip_size_kb": 0, 00:43:28.457 "state": "online", 00:43:28.457 "raid_level": "raid1", 00:43:28.457 "superblock": true, 00:43:28.457 "num_base_bdevs": 2, 00:43:28.457 "num_base_bdevs_discovered": 1, 00:43:28.457 "num_base_bdevs_operational": 1, 00:43:28.457 "base_bdevs_list": [ 00:43:28.457 { 00:43:28.457 "name": null, 00:43:28.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:28.457 "is_configured": false, 00:43:28.457 "data_offset": 0, 00:43:28.457 "data_size": 7936 00:43:28.457 }, 00:43:28.457 { 00:43:28.457 "name": "BaseBdev2", 00:43:28.457 "uuid": "86eadb8d-1e88-52f5-a3a8-58503ef3f13e", 00:43:28.457 "is_configured": true, 00:43:28.457 "data_offset": 256, 00:43:28.457 "data_size": 7936 00:43:28.457 } 00:43:28.457 ] 00:43:28.457 }' 00:43:28.457 17:40:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:28.457 17:40:28 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:43:28.722 17:40:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:43:28.722 17:40:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:28.722 17:40:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:28.722 [2024-11-26 17:40:29.298251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:43:28.722 [2024-11-26 17:40:29.314273] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:43:28.722 17:40:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:28.722 17:40:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:43:28.722 [2024-11-26 17:40:29.316465] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:43:29.677 17:40:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:43:29.677 17:40:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:43:29.677 17:40:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:43:29.677 17:40:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:43:29.677 17:40:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:43:29.677 17:40:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:29.677 17:40:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:29.677 17:40:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:43:29.677 17:40:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:29.677 17:40:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:29.936 17:40:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:43:29.936 "name": "raid_bdev1", 00:43:29.936 "uuid": "45b5027f-c696-455d-9413-b0b59e1823fa", 00:43:29.936 "strip_size_kb": 0, 00:43:29.936 "state": "online", 00:43:29.936 "raid_level": "raid1", 00:43:29.936 "superblock": true, 00:43:29.936 "num_base_bdevs": 2, 00:43:29.936 "num_base_bdevs_discovered": 2, 00:43:29.936 "num_base_bdevs_operational": 2, 00:43:29.936 "process": { 00:43:29.936 "type": "rebuild", 00:43:29.936 "target": "spare", 00:43:29.936 "progress": { 00:43:29.936 "blocks": 2560, 00:43:29.936 "percent": 32 00:43:29.936 } 00:43:29.936 }, 00:43:29.936 "base_bdevs_list": [ 00:43:29.936 { 00:43:29.936 "name": "spare", 00:43:29.936 "uuid": "35fd9410-2764-55ae-b45c-2c2506548c1b", 00:43:29.936 "is_configured": true, 00:43:29.936 "data_offset": 256, 00:43:29.936 "data_size": 7936 00:43:29.936 }, 00:43:29.936 { 00:43:29.936 "name": "BaseBdev2", 00:43:29.936 "uuid": "86eadb8d-1e88-52f5-a3a8-58503ef3f13e", 00:43:29.936 "is_configured": true, 00:43:29.936 "data_offset": 256, 00:43:29.936 "data_size": 7936 00:43:29.936 } 00:43:29.936 ] 00:43:29.936 }' 00:43:29.936 17:40:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:43:29.936 17:40:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:43:29.936 17:40:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:43:29.936 17:40:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:43:29.936 17:40:30 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:43:29.936 17:40:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:29.936 17:40:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:29.936 [2024-11-26 17:40:30.485119] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:43:29.936 [2024-11-26 17:40:30.527053] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:43:29.936 [2024-11-26 17:40:30.527162] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:29.936 [2024-11-26 17:40:30.527181] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:43:29.936 [2024-11-26 17:40:30.527193] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:43:29.936 17:40:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:29.936 17:40:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:43:29.936 17:40:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:29.936 17:40:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:29.936 17:40:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:29.936 17:40:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:29.936 17:40:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:43:29.936 17:40:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:29.936 17:40:30 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:29.936 17:40:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:29.936 17:40:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:29.936 17:40:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:29.936 17:40:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:29.936 17:40:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:29.936 17:40:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:29.936 17:40:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:29.936 17:40:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:29.936 "name": "raid_bdev1", 00:43:29.936 "uuid": "45b5027f-c696-455d-9413-b0b59e1823fa", 00:43:29.936 "strip_size_kb": 0, 00:43:29.936 "state": "online", 00:43:29.936 "raid_level": "raid1", 00:43:29.936 "superblock": true, 00:43:29.936 "num_base_bdevs": 2, 00:43:29.936 "num_base_bdevs_discovered": 1, 00:43:29.936 "num_base_bdevs_operational": 1, 00:43:29.936 "base_bdevs_list": [ 00:43:29.936 { 00:43:29.936 "name": null, 00:43:29.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:29.936 "is_configured": false, 00:43:29.936 "data_offset": 0, 00:43:29.936 "data_size": 7936 00:43:29.936 }, 00:43:29.936 { 00:43:29.936 "name": "BaseBdev2", 00:43:29.936 "uuid": "86eadb8d-1e88-52f5-a3a8-58503ef3f13e", 00:43:29.936 "is_configured": true, 00:43:29.936 "data_offset": 256, 00:43:29.936 "data_size": 7936 00:43:29.936 } 00:43:29.936 ] 00:43:29.936 }' 00:43:29.936 17:40:30 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:29.937 17:40:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:30.504 17:40:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:43:30.504 17:40:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:43:30.504 17:40:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:43:30.504 17:40:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:43:30.504 17:40:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:43:30.504 17:40:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:30.504 17:40:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:30.504 17:40:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:30.504 17:40:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:30.504 17:40:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:30.504 17:40:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:43:30.504 "name": "raid_bdev1", 00:43:30.504 "uuid": "45b5027f-c696-455d-9413-b0b59e1823fa", 00:43:30.504 "strip_size_kb": 0, 00:43:30.504 "state": "online", 00:43:30.504 "raid_level": "raid1", 00:43:30.504 "superblock": true, 00:43:30.504 "num_base_bdevs": 2, 00:43:30.504 "num_base_bdevs_discovered": 1, 00:43:30.504 "num_base_bdevs_operational": 1, 00:43:30.504 "base_bdevs_list": [ 00:43:30.504 { 00:43:30.504 "name": null, 00:43:30.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:30.504 
"is_configured": false, 00:43:30.504 "data_offset": 0, 00:43:30.504 "data_size": 7936 00:43:30.504 }, 00:43:30.504 { 00:43:30.504 "name": "BaseBdev2", 00:43:30.504 "uuid": "86eadb8d-1e88-52f5-a3a8-58503ef3f13e", 00:43:30.504 "is_configured": true, 00:43:30.504 "data_offset": 256, 00:43:30.504 "data_size": 7936 00:43:30.504 } 00:43:30.504 ] 00:43:30.504 }' 00:43:30.504 17:40:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:43:30.504 17:40:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:43:30.504 17:40:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:43:30.504 17:40:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:43:30.504 17:40:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:43:30.504 17:40:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:30.504 17:40:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:30.763 [2024-11-26 17:40:31.200777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:43:30.763 [2024-11-26 17:40:31.217899] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:43:30.763 17:40:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:30.763 17:40:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:43:30.763 [2024-11-26 17:40:31.220396] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:43:31.700 17:40:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:43:31.700 17:40:32 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:43:31.700 17:40:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:43:31.700 17:40:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:43:31.700 17:40:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:43:31.700 17:40:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:31.700 17:40:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:31.700 17:40:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:31.700 17:40:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:31.700 17:40:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:31.700 17:40:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:43:31.700 "name": "raid_bdev1", 00:43:31.700 "uuid": "45b5027f-c696-455d-9413-b0b59e1823fa", 00:43:31.700 "strip_size_kb": 0, 00:43:31.700 "state": "online", 00:43:31.700 "raid_level": "raid1", 00:43:31.700 "superblock": true, 00:43:31.700 "num_base_bdevs": 2, 00:43:31.700 "num_base_bdevs_discovered": 2, 00:43:31.700 "num_base_bdevs_operational": 2, 00:43:31.700 "process": { 00:43:31.700 "type": "rebuild", 00:43:31.700 "target": "spare", 00:43:31.700 "progress": { 00:43:31.700 "blocks": 2560, 00:43:31.700 "percent": 32 00:43:31.700 } 00:43:31.700 }, 00:43:31.700 "base_bdevs_list": [ 00:43:31.700 { 00:43:31.700 "name": "spare", 00:43:31.700 "uuid": "35fd9410-2764-55ae-b45c-2c2506548c1b", 00:43:31.700 "is_configured": true, 00:43:31.700 "data_offset": 256, 00:43:31.700 "data_size": 7936 00:43:31.700 }, 
00:43:31.700 { 00:43:31.700 "name": "BaseBdev2", 00:43:31.700 "uuid": "86eadb8d-1e88-52f5-a3a8-58503ef3f13e", 00:43:31.700 "is_configured": true, 00:43:31.700 "data_offset": 256, 00:43:31.700 "data_size": 7936 00:43:31.700 } 00:43:31.700 ] 00:43:31.700 }' 00:43:31.700 17:40:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:43:31.700 17:40:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:43:31.700 17:40:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:43:31.700 17:40:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:43:31.700 17:40:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:43:31.700 17:40:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:43:31.700 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:43:31.700 17:40:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:43:31.700 17:40:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:43:31.700 17:40:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:43:31.700 17:40:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=727 00:43:31.700 17:40:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:43:31.700 17:40:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:43:31.700 17:40:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:43:31.700 17:40:32 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:43:31.700 17:40:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:43:31.700 17:40:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:43:31.700 17:40:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:31.700 17:40:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:31.700 17:40:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:31.700 17:40:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:31.700 17:40:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:31.960 17:40:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:43:31.960 "name": "raid_bdev1", 00:43:31.960 "uuid": "45b5027f-c696-455d-9413-b0b59e1823fa", 00:43:31.960 "strip_size_kb": 0, 00:43:31.960 "state": "online", 00:43:31.960 "raid_level": "raid1", 00:43:31.960 "superblock": true, 00:43:31.960 "num_base_bdevs": 2, 00:43:31.960 "num_base_bdevs_discovered": 2, 00:43:31.960 "num_base_bdevs_operational": 2, 00:43:31.960 "process": { 00:43:31.960 "type": "rebuild", 00:43:31.960 "target": "spare", 00:43:31.960 "progress": { 00:43:31.960 "blocks": 2816, 00:43:31.960 "percent": 35 00:43:31.960 } 00:43:31.960 }, 00:43:31.960 "base_bdevs_list": [ 00:43:31.960 { 00:43:31.960 "name": "spare", 00:43:31.960 "uuid": "35fd9410-2764-55ae-b45c-2c2506548c1b", 00:43:31.960 "is_configured": true, 00:43:31.960 "data_offset": 256, 00:43:31.960 "data_size": 7936 00:43:31.960 }, 00:43:31.960 { 00:43:31.960 "name": "BaseBdev2", 00:43:31.961 "uuid": "86eadb8d-1e88-52f5-a3a8-58503ef3f13e", 00:43:31.961 
"is_configured": true, 00:43:31.961 "data_offset": 256, 00:43:31.961 "data_size": 7936 00:43:31.961 } 00:43:31.961 ] 00:43:31.961 }' 00:43:31.961 17:40:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:43:31.961 17:40:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:43:31.961 17:40:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:43:31.961 17:40:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:43:31.961 17:40:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:43:32.900 17:40:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:43:32.900 17:40:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:43:32.900 17:40:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:43:32.900 17:40:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:43:32.900 17:40:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:43:32.901 17:40:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:43:32.901 17:40:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:32.901 17:40:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:32.901 17:40:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:32.901 17:40:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:32.901 17:40:33 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:32.901 17:40:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:43:32.901 "name": "raid_bdev1", 00:43:32.901 "uuid": "45b5027f-c696-455d-9413-b0b59e1823fa", 00:43:32.901 "strip_size_kb": 0, 00:43:32.901 "state": "online", 00:43:32.901 "raid_level": "raid1", 00:43:32.901 "superblock": true, 00:43:32.901 "num_base_bdevs": 2, 00:43:32.901 "num_base_bdevs_discovered": 2, 00:43:32.901 "num_base_bdevs_operational": 2, 00:43:32.901 "process": { 00:43:32.901 "type": "rebuild", 00:43:32.901 "target": "spare", 00:43:32.901 "progress": { 00:43:32.901 "blocks": 5632, 00:43:32.901 "percent": 70 00:43:32.901 } 00:43:32.901 }, 00:43:32.901 "base_bdevs_list": [ 00:43:32.901 { 00:43:32.901 "name": "spare", 00:43:32.901 "uuid": "35fd9410-2764-55ae-b45c-2c2506548c1b", 00:43:32.901 "is_configured": true, 00:43:32.901 "data_offset": 256, 00:43:32.901 "data_size": 7936 00:43:32.901 }, 00:43:32.901 { 00:43:32.901 "name": "BaseBdev2", 00:43:32.901 "uuid": "86eadb8d-1e88-52f5-a3a8-58503ef3f13e", 00:43:32.901 "is_configured": true, 00:43:32.901 "data_offset": 256, 00:43:32.901 "data_size": 7936 00:43:32.901 } 00:43:32.901 ] 00:43:32.901 }' 00:43:32.901 17:40:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:43:33.166 17:40:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:43:33.166 17:40:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:43:33.166 17:40:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:43:33.166 17:40:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:43:33.739 [2024-11-26 17:40:34.348489] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:43:33.739 [2024-11-26 17:40:34.348763] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:43:33.739 [2024-11-26 17:40:34.348961] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:34.000 17:40:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:43:34.000 17:40:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:43:34.000 17:40:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:43:34.000 17:40:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:43:34.000 17:40:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:43:34.000 17:40:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:43:34.000 17:40:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:34.000 17:40:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:34.000 17:40:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:34.000 17:40:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:34.260 17:40:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:34.260 17:40:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:43:34.260 "name": "raid_bdev1", 00:43:34.260 "uuid": "45b5027f-c696-455d-9413-b0b59e1823fa", 00:43:34.260 "strip_size_kb": 0, 00:43:34.260 "state": "online", 00:43:34.260 "raid_level": "raid1", 00:43:34.260 "superblock": true, 00:43:34.260 
"num_base_bdevs": 2, 00:43:34.260 "num_base_bdevs_discovered": 2, 00:43:34.260 "num_base_bdevs_operational": 2, 00:43:34.260 "base_bdevs_list": [ 00:43:34.260 { 00:43:34.260 "name": "spare", 00:43:34.260 "uuid": "35fd9410-2764-55ae-b45c-2c2506548c1b", 00:43:34.260 "is_configured": true, 00:43:34.260 "data_offset": 256, 00:43:34.260 "data_size": 7936 00:43:34.260 }, 00:43:34.260 { 00:43:34.260 "name": "BaseBdev2", 00:43:34.260 "uuid": "86eadb8d-1e88-52f5-a3a8-58503ef3f13e", 00:43:34.260 "is_configured": true, 00:43:34.260 "data_offset": 256, 00:43:34.260 "data_size": 7936 00:43:34.260 } 00:43:34.260 ] 00:43:34.260 }' 00:43:34.260 17:40:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:43:34.260 17:40:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:43:34.260 17:40:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:43:34.261 17:40:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:43:34.261 17:40:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:43:34.261 17:40:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:43:34.261 17:40:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:43:34.261 17:40:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:43:34.261 17:40:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:43:34.261 17:40:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:43:34.261 17:40:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:34.261 17:40:34 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:34.261 17:40:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:34.261 17:40:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:34.261 17:40:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:34.261 17:40:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:43:34.261 "name": "raid_bdev1", 00:43:34.261 "uuid": "45b5027f-c696-455d-9413-b0b59e1823fa", 00:43:34.261 "strip_size_kb": 0, 00:43:34.261 "state": "online", 00:43:34.261 "raid_level": "raid1", 00:43:34.261 "superblock": true, 00:43:34.261 "num_base_bdevs": 2, 00:43:34.261 "num_base_bdevs_discovered": 2, 00:43:34.261 "num_base_bdevs_operational": 2, 00:43:34.261 "base_bdevs_list": [ 00:43:34.261 { 00:43:34.261 "name": "spare", 00:43:34.261 "uuid": "35fd9410-2764-55ae-b45c-2c2506548c1b", 00:43:34.261 "is_configured": true, 00:43:34.261 "data_offset": 256, 00:43:34.261 "data_size": 7936 00:43:34.261 }, 00:43:34.261 { 00:43:34.261 "name": "BaseBdev2", 00:43:34.261 "uuid": "86eadb8d-1e88-52f5-a3a8-58503ef3f13e", 00:43:34.261 "is_configured": true, 00:43:34.261 "data_offset": 256, 00:43:34.261 "data_size": 7936 00:43:34.261 } 00:43:34.261 ] 00:43:34.261 }' 00:43:34.261 17:40:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:43:34.261 17:40:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:43:34.261 17:40:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:43:34.522 17:40:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:43:34.522 17:40:34 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:43:34.522 17:40:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:34.522 17:40:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:34.522 17:40:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:34.522 17:40:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:34.522 17:40:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:43:34.522 17:40:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:34.522 17:40:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:34.522 17:40:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:34.522 17:40:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:34.522 17:40:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:34.522 17:40:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:34.522 17:40:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:34.522 17:40:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:34.522 17:40:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:34.522 17:40:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:34.522 "name": "raid_bdev1", 00:43:34.522 "uuid": "45b5027f-c696-455d-9413-b0b59e1823fa", 00:43:34.522 
"strip_size_kb": 0, 00:43:34.522 "state": "online", 00:43:34.522 "raid_level": "raid1", 00:43:34.522 "superblock": true, 00:43:34.522 "num_base_bdevs": 2, 00:43:34.522 "num_base_bdevs_discovered": 2, 00:43:34.522 "num_base_bdevs_operational": 2, 00:43:34.522 "base_bdevs_list": [ 00:43:34.522 { 00:43:34.522 "name": "spare", 00:43:34.522 "uuid": "35fd9410-2764-55ae-b45c-2c2506548c1b", 00:43:34.522 "is_configured": true, 00:43:34.522 "data_offset": 256, 00:43:34.522 "data_size": 7936 00:43:34.522 }, 00:43:34.522 { 00:43:34.522 "name": "BaseBdev2", 00:43:34.522 "uuid": "86eadb8d-1e88-52f5-a3a8-58503ef3f13e", 00:43:34.522 "is_configured": true, 00:43:34.522 "data_offset": 256, 00:43:34.522 "data_size": 7936 00:43:34.522 } 00:43:34.522 ] 00:43:34.522 }' 00:43:34.522 17:40:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:34.522 17:40:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:34.781 17:40:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:43:34.781 17:40:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:34.781 17:40:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:34.781 [2024-11-26 17:40:35.456717] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:43:34.781 [2024-11-26 17:40:35.456850] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:43:34.781 [2024-11-26 17:40:35.456984] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:43:34.781 [2024-11-26 17:40:35.457085] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:43:34.781 [2024-11-26 17:40:35.457152] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, 
state offline 00:43:34.782 17:40:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:34.782 17:40:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:34.782 17:40:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:43:34.782 17:40:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:34.782 17:40:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:35.041 17:40:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:35.041 17:40:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:43:35.041 17:40:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:43:35.041 17:40:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:43:35.041 17:40:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:43:35.041 17:40:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:43:35.041 17:40:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:43:35.041 17:40:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:43:35.041 17:40:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:43:35.041 17:40:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:43:35.041 17:40:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:43:35.041 17:40:35 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:43:35.041 17:40:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:43:35.041 17:40:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:43:35.041 /dev/nbd0 00:43:35.302 17:40:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:43:35.302 17:40:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:43:35.302 17:40:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:43:35.302 17:40:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:43:35.302 17:40:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:43:35.302 17:40:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:43:35.302 17:40:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:43:35.302 17:40:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:43:35.302 17:40:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:43:35.302 17:40:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:43:35.302 17:40:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:35.302 1+0 records in 00:43:35.302 1+0 records out 00:43:35.302 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000519231 s, 7.9 MB/s 00:43:35.302 17:40:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:35.302 17:40:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:43:35.302 17:40:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:35.302 17:40:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:43:35.302 17:40:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:43:35.302 17:40:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:43:35.302 17:40:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:43:35.302 17:40:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:43:35.302 /dev/nbd1 00:43:35.561 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:43:35.561 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:43:35.561 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:43:35.561 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:43:35.561 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:43:35.561 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:43:35.562 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:43:35.562 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:43:35.562 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:43:35.562 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:43:35.562 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:35.562 1+0 records in 00:43:35.562 1+0 records out 00:43:35.562 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440819 s, 9.3 MB/s 00:43:35.562 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:35.562 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:43:35.562 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:35.562 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:43:35.562 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:43:35.562 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:43:35.562 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:43:35.562 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:43:35.562 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:43:35.562 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:43:35.562 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:43:35.562 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:43:35.562 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:43:35.562 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:35.562 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:43:35.821 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:43:35.821 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:43:35.821 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:43:35.821 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:35.821 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:35.821 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:43:35.821 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:43:35.821 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:43:35.821 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:35.821 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:43:36.081 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:43:36.081 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:43:36.081 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:43:36.081 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:36.081 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:36.081 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:43:36.081 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:43:36.081 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:43:36.081 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:43:36.081 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:43:36.081 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:36.081 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:36.081 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:36.081 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:43:36.081 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:36.081 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:36.081 [2024-11-26 17:40:36.746550] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:43:36.081 [2024-11-26 17:40:36.746619] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:36.081 [2024-11-26 17:40:36.746645] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:43:36.081 [2024-11-26 17:40:36.746655] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:43:36.081 [2024-11-26 17:40:36.749189] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:36.081 [2024-11-26 17:40:36.749230] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:43:36.081 [2024-11-26 17:40:36.749306] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:43:36.081 [2024-11-26 17:40:36.749372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:43:36.081 [2024-11-26 17:40:36.749550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:43:36.081 spare 00:43:36.081 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:36.081 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:43:36.081 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:36.081 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:36.341 [2024-11-26 17:40:36.849463] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:43:36.341 [2024-11-26 17:40:36.849526] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:43:36.341 [2024-11-26 17:40:36.849701] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:43:36.341 [2024-11-26 17:40:36.849919] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:43:36.341 [2024-11-26 17:40:36.849937] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:43:36.341 [2024-11-26 17:40:36.850100] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:36.341 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:43:36.341 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:43:36.342 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:36.342 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:36.342 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:36.342 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:36.342 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:43:36.342 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:36.342 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:36.342 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:36.342 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:36.342 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:36.342 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:36.342 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:36.342 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:36.342 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:36.342 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:36.342 "name": "raid_bdev1", 00:43:36.342 "uuid": 
"45b5027f-c696-455d-9413-b0b59e1823fa", 00:43:36.342 "strip_size_kb": 0, 00:43:36.342 "state": "online", 00:43:36.342 "raid_level": "raid1", 00:43:36.342 "superblock": true, 00:43:36.342 "num_base_bdevs": 2, 00:43:36.342 "num_base_bdevs_discovered": 2, 00:43:36.342 "num_base_bdevs_operational": 2, 00:43:36.342 "base_bdevs_list": [ 00:43:36.342 { 00:43:36.342 "name": "spare", 00:43:36.342 "uuid": "35fd9410-2764-55ae-b45c-2c2506548c1b", 00:43:36.342 "is_configured": true, 00:43:36.342 "data_offset": 256, 00:43:36.342 "data_size": 7936 00:43:36.342 }, 00:43:36.342 { 00:43:36.342 "name": "BaseBdev2", 00:43:36.342 "uuid": "86eadb8d-1e88-52f5-a3a8-58503ef3f13e", 00:43:36.342 "is_configured": true, 00:43:36.342 "data_offset": 256, 00:43:36.342 "data_size": 7936 00:43:36.342 } 00:43:36.342 ] 00:43:36.342 }' 00:43:36.342 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:36.342 17:40:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:36.602 17:40:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:43:36.602 17:40:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:43:36.602 17:40:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:43:36.602 17:40:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:43:36.602 17:40:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:43:36.602 17:40:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:36.862 17:40:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:36.862 17:40:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:43:36.862 17:40:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:36.862 17:40:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:36.862 17:40:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:43:36.862 "name": "raid_bdev1", 00:43:36.862 "uuid": "45b5027f-c696-455d-9413-b0b59e1823fa", 00:43:36.862 "strip_size_kb": 0, 00:43:36.862 "state": "online", 00:43:36.862 "raid_level": "raid1", 00:43:36.862 "superblock": true, 00:43:36.862 "num_base_bdevs": 2, 00:43:36.862 "num_base_bdevs_discovered": 2, 00:43:36.862 "num_base_bdevs_operational": 2, 00:43:36.862 "base_bdevs_list": [ 00:43:36.862 { 00:43:36.862 "name": "spare", 00:43:36.862 "uuid": "35fd9410-2764-55ae-b45c-2c2506548c1b", 00:43:36.862 "is_configured": true, 00:43:36.862 "data_offset": 256, 00:43:36.862 "data_size": 7936 00:43:36.862 }, 00:43:36.862 { 00:43:36.862 "name": "BaseBdev2", 00:43:36.862 "uuid": "86eadb8d-1e88-52f5-a3a8-58503ef3f13e", 00:43:36.862 "is_configured": true, 00:43:36.862 "data_offset": 256, 00:43:36.862 "data_size": 7936 00:43:36.862 } 00:43:36.862 ] 00:43:36.862 }' 00:43:36.862 17:40:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:43:36.862 17:40:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:43:36.862 17:40:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:43:36.862 17:40:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:43:36.862 17:40:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:36.862 17:40:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:43:36.862 
17:40:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:36.862 17:40:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:36.862 17:40:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:36.862 17:40:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:43:36.862 17:40:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:43:36.862 17:40:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:36.862 17:40:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:36.862 [2024-11-26 17:40:37.497389] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:43:36.862 17:40:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:36.862 17:40:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:43:36.862 17:40:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:36.862 17:40:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:36.862 17:40:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:36.862 17:40:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:36.862 17:40:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:43:36.862 17:40:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:36.862 17:40:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- 
# local num_base_bdevs 00:43:36.862 17:40:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:36.862 17:40:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:36.862 17:40:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:36.862 17:40:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:36.862 17:40:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:36.862 17:40:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:36.862 17:40:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:36.862 17:40:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:36.862 "name": "raid_bdev1", 00:43:36.862 "uuid": "45b5027f-c696-455d-9413-b0b59e1823fa", 00:43:36.862 "strip_size_kb": 0, 00:43:36.862 "state": "online", 00:43:36.862 "raid_level": "raid1", 00:43:36.862 "superblock": true, 00:43:36.862 "num_base_bdevs": 2, 00:43:36.862 "num_base_bdevs_discovered": 1, 00:43:36.862 "num_base_bdevs_operational": 1, 00:43:36.862 "base_bdevs_list": [ 00:43:36.862 { 00:43:36.862 "name": null, 00:43:36.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:36.862 "is_configured": false, 00:43:36.862 "data_offset": 0, 00:43:36.862 "data_size": 7936 00:43:36.862 }, 00:43:36.862 { 00:43:36.862 "name": "BaseBdev2", 00:43:36.862 "uuid": "86eadb8d-1e88-52f5-a3a8-58503ef3f13e", 00:43:36.862 "is_configured": true, 00:43:36.862 "data_offset": 256, 00:43:36.862 "data_size": 7936 00:43:36.862 } 00:43:36.862 ] 00:43:36.862 }' 00:43:36.862 17:40:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:36.862 17:40:37 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:37.432 17:40:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:43:37.432 17:40:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:37.433 17:40:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:37.433 [2024-11-26 17:40:37.948763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:43:37.433 [2024-11-26 17:40:37.949067] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:43:37.433 [2024-11-26 17:40:37.949096] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:43:37.433 [2024-11-26 17:40:37.949141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:43:37.433 [2024-11-26 17:40:37.965602] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:43:37.433 17:40:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:37.433 17:40:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:43:37.433 [2024-11-26 17:40:37.967983] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:43:38.371 17:40:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:43:38.371 17:40:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:43:38.371 17:40:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:43:38.371 17:40:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:43:38.371 17:40:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:43:38.371 17:40:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:38.371 17:40:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:38.371 17:40:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:38.371 17:40:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:38.371 17:40:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:38.371 17:40:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:43:38.371 "name": "raid_bdev1", 00:43:38.371 "uuid": "45b5027f-c696-455d-9413-b0b59e1823fa", 00:43:38.371 "strip_size_kb": 0, 00:43:38.371 "state": "online", 00:43:38.371 "raid_level": "raid1", 00:43:38.371 "superblock": true, 00:43:38.371 "num_base_bdevs": 2, 00:43:38.371 "num_base_bdevs_discovered": 2, 00:43:38.371 "num_base_bdevs_operational": 2, 00:43:38.371 "process": { 00:43:38.371 "type": "rebuild", 00:43:38.371 "target": "spare", 00:43:38.371 "progress": { 00:43:38.371 "blocks": 2560, 00:43:38.371 "percent": 32 00:43:38.371 } 00:43:38.371 }, 00:43:38.371 "base_bdevs_list": [ 00:43:38.371 { 00:43:38.371 "name": "spare", 00:43:38.371 "uuid": "35fd9410-2764-55ae-b45c-2c2506548c1b", 00:43:38.371 "is_configured": true, 00:43:38.371 "data_offset": 256, 00:43:38.371 "data_size": 7936 00:43:38.371 }, 00:43:38.371 { 00:43:38.371 "name": "BaseBdev2", 00:43:38.371 "uuid": "86eadb8d-1e88-52f5-a3a8-58503ef3f13e", 00:43:38.371 "is_configured": true, 00:43:38.371 "data_offset": 256, 00:43:38.371 "data_size": 7936 00:43:38.371 } 00:43:38.371 ] 00:43:38.372 }' 00:43:38.372 17:40:39 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:43:38.631 17:40:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:43:38.631 17:40:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:43:38.631 17:40:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:43:38.631 17:40:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:43:38.631 17:40:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:38.631 17:40:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:38.631 [2024-11-26 17:40:39.120785] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:43:38.631 [2024-11-26 17:40:39.178459] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:43:38.631 [2024-11-26 17:40:39.178575] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:38.631 [2024-11-26 17:40:39.178596] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:43:38.631 [2024-11-26 17:40:39.178624] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:43:38.631 17:40:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:38.631 17:40:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:43:38.631 17:40:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:38.631 17:40:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:38.631 17:40:39 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:38.631 17:40:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:38.631 17:40:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:43:38.631 17:40:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:38.631 17:40:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:38.631 17:40:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:38.631 17:40:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:38.631 17:40:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:38.631 17:40:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:38.631 17:40:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:38.631 17:40:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:38.631 17:40:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:38.631 17:40:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:38.631 "name": "raid_bdev1", 00:43:38.631 "uuid": "45b5027f-c696-455d-9413-b0b59e1823fa", 00:43:38.631 "strip_size_kb": 0, 00:43:38.631 "state": "online", 00:43:38.631 "raid_level": "raid1", 00:43:38.631 "superblock": true, 00:43:38.631 "num_base_bdevs": 2, 00:43:38.631 "num_base_bdevs_discovered": 1, 00:43:38.631 "num_base_bdevs_operational": 1, 00:43:38.631 "base_bdevs_list": [ 00:43:38.631 { 00:43:38.631 "name": null, 00:43:38.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:38.631 
"is_configured": false, 00:43:38.631 "data_offset": 0, 00:43:38.631 "data_size": 7936 00:43:38.631 }, 00:43:38.631 { 00:43:38.631 "name": "BaseBdev2", 00:43:38.631 "uuid": "86eadb8d-1e88-52f5-a3a8-58503ef3f13e", 00:43:38.631 "is_configured": true, 00:43:38.632 "data_offset": 256, 00:43:38.632 "data_size": 7936 00:43:38.632 } 00:43:38.632 ] 00:43:38.632 }' 00:43:38.632 17:40:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:38.632 17:40:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:39.198 17:40:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:43:39.198 17:40:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:39.198 17:40:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:39.198 [2024-11-26 17:40:39.704743] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:43:39.198 [2024-11-26 17:40:39.704850] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:39.198 [2024-11-26 17:40:39.704885] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:43:39.198 [2024-11-26 17:40:39.704900] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:39.198 [2024-11-26 17:40:39.705271] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:39.198 [2024-11-26 17:40:39.705299] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:43:39.198 [2024-11-26 17:40:39.705381] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:43:39.198 [2024-11-26 17:40:39.705407] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 
00:43:39.198 [2024-11-26 17:40:39.705420] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:43:39.198 [2024-11-26 17:40:39.705447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:43:39.198 [2024-11-26 17:40:39.723669] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:43:39.198 spare 00:43:39.198 17:40:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:39.198 17:40:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:43:39.198 [2024-11-26 17:40:39.726201] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:43:40.133 17:40:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:43:40.133 17:40:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:43:40.133 17:40:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:43:40.133 17:40:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:43:40.133 17:40:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:43:40.133 17:40:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:40.133 17:40:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:40.133 17:40:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:40.133 17:40:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:40.134 17:40:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:43:40.134 17:40:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:43:40.134 "name": "raid_bdev1", 00:43:40.134 "uuid": "45b5027f-c696-455d-9413-b0b59e1823fa", 00:43:40.134 "strip_size_kb": 0, 00:43:40.134 "state": "online", 00:43:40.134 "raid_level": "raid1", 00:43:40.134 "superblock": true, 00:43:40.134 "num_base_bdevs": 2, 00:43:40.134 "num_base_bdevs_discovered": 2, 00:43:40.134 "num_base_bdevs_operational": 2, 00:43:40.134 "process": { 00:43:40.134 "type": "rebuild", 00:43:40.134 "target": "spare", 00:43:40.134 "progress": { 00:43:40.134 "blocks": 2560, 00:43:40.134 "percent": 32 00:43:40.134 } 00:43:40.134 }, 00:43:40.134 "base_bdevs_list": [ 00:43:40.134 { 00:43:40.134 "name": "spare", 00:43:40.134 "uuid": "35fd9410-2764-55ae-b45c-2c2506548c1b", 00:43:40.134 "is_configured": true, 00:43:40.134 "data_offset": 256, 00:43:40.134 "data_size": 7936 00:43:40.134 }, 00:43:40.134 { 00:43:40.134 "name": "BaseBdev2", 00:43:40.134 "uuid": "86eadb8d-1e88-52f5-a3a8-58503ef3f13e", 00:43:40.134 "is_configured": true, 00:43:40.134 "data_offset": 256, 00:43:40.134 "data_size": 7936 00:43:40.134 } 00:43:40.134 ] 00:43:40.134 }' 00:43:40.134 17:40:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:43:40.393 17:40:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:43:40.393 17:40:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:43:40.393 17:40:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:43:40.393 17:40:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:43:40.393 17:40:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:40.393 17:40:40 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:40.393 [2024-11-26 17:40:40.887395] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:43:40.393 [2024-11-26 17:40:40.936165] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:43:40.393 [2024-11-26 17:40:40.936252] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:40.393 [2024-11-26 17:40:40.936275] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:43:40.393 [2024-11-26 17:40:40.936285] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:43:40.393 17:40:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:40.393 17:40:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:43:40.393 17:40:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:40.393 17:40:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:40.393 17:40:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:40.393 17:40:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:40.393 17:40:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:43:40.393 17:40:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:40.393 17:40:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:40.393 17:40:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:40.393 17:40:40 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:40.393 17:40:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:40.393 17:40:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:40.393 17:40:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:40.393 17:40:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:40.393 17:40:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:40.393 17:40:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:40.393 "name": "raid_bdev1", 00:43:40.393 "uuid": "45b5027f-c696-455d-9413-b0b59e1823fa", 00:43:40.393 "strip_size_kb": 0, 00:43:40.393 "state": "online", 00:43:40.393 "raid_level": "raid1", 00:43:40.393 "superblock": true, 00:43:40.393 "num_base_bdevs": 2, 00:43:40.393 "num_base_bdevs_discovered": 1, 00:43:40.393 "num_base_bdevs_operational": 1, 00:43:40.393 "base_bdevs_list": [ 00:43:40.393 { 00:43:40.393 "name": null, 00:43:40.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:40.393 "is_configured": false, 00:43:40.393 "data_offset": 0, 00:43:40.393 "data_size": 7936 00:43:40.393 }, 00:43:40.393 { 00:43:40.393 "name": "BaseBdev2", 00:43:40.393 "uuid": "86eadb8d-1e88-52f5-a3a8-58503ef3f13e", 00:43:40.393 "is_configured": true, 00:43:40.393 "data_offset": 256, 00:43:40.393 "data_size": 7936 00:43:40.393 } 00:43:40.393 ] 00:43:40.393 }' 00:43:40.393 17:40:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:40.393 17:40:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:40.961 17:40:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:43:40.961 17:40:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:43:40.961 17:40:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:43:40.961 17:40:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:43:40.961 17:40:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:43:40.961 17:40:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:40.961 17:40:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:40.961 17:40:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:40.961 17:40:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:40.961 17:40:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:40.961 17:40:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:43:40.961 "name": "raid_bdev1", 00:43:40.961 "uuid": "45b5027f-c696-455d-9413-b0b59e1823fa", 00:43:40.961 "strip_size_kb": 0, 00:43:40.961 "state": "online", 00:43:40.961 "raid_level": "raid1", 00:43:40.961 "superblock": true, 00:43:40.961 "num_base_bdevs": 2, 00:43:40.961 "num_base_bdevs_discovered": 1, 00:43:40.961 "num_base_bdevs_operational": 1, 00:43:40.961 "base_bdevs_list": [ 00:43:40.961 { 00:43:40.961 "name": null, 00:43:40.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:40.961 "is_configured": false, 00:43:40.961 "data_offset": 0, 00:43:40.961 "data_size": 7936 00:43:40.961 }, 00:43:40.961 { 00:43:40.961 "name": "BaseBdev2", 00:43:40.961 "uuid": "86eadb8d-1e88-52f5-a3a8-58503ef3f13e", 00:43:40.961 "is_configured": true, 
00:43:40.961 "data_offset": 256, 00:43:40.961 "data_size": 7936 00:43:40.961 } 00:43:40.961 ] 00:43:40.961 }' 00:43:40.961 17:40:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:43:40.961 17:40:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:43:40.961 17:40:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:43:40.961 17:40:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:43:40.961 17:40:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:43:40.961 17:40:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:40.961 17:40:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:40.961 17:40:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:40.961 17:40:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:43:40.962 17:40:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:40.962 17:40:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:40.962 [2024-11-26 17:40:41.577639] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:43:40.962 [2024-11-26 17:40:41.577712] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:40.962 [2024-11-26 17:40:41.577739] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:43:40.962 [2024-11-26 17:40:41.577751] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:40.962 [2024-11-26 17:40:41.578060] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:40.962 [2024-11-26 17:40:41.578082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:43:40.962 [2024-11-26 17:40:41.578146] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:43:40.962 [2024-11-26 17:40:41.578165] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:43:40.962 [2024-11-26 17:40:41.578179] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:43:40.962 [2024-11-26 17:40:41.578193] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:43:40.962 BaseBdev1 00:43:40.962 17:40:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:40.962 17:40:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:43:41.897 17:40:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:43:41.897 17:40:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:41.897 17:40:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:41.897 17:40:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:41.897 17:40:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:41.897 17:40:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:43:41.897 17:40:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:41.897 17:40:42 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:41.897 17:40:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:41.897 17:40:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:42.156 17:40:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:42.156 17:40:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:42.156 17:40:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:42.156 17:40:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:42.156 17:40:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:42.156 17:40:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:42.156 "name": "raid_bdev1", 00:43:42.156 "uuid": "45b5027f-c696-455d-9413-b0b59e1823fa", 00:43:42.156 "strip_size_kb": 0, 00:43:42.156 "state": "online", 00:43:42.156 "raid_level": "raid1", 00:43:42.156 "superblock": true, 00:43:42.156 "num_base_bdevs": 2, 00:43:42.156 "num_base_bdevs_discovered": 1, 00:43:42.156 "num_base_bdevs_operational": 1, 00:43:42.156 "base_bdevs_list": [ 00:43:42.156 { 00:43:42.156 "name": null, 00:43:42.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:42.156 "is_configured": false, 00:43:42.156 "data_offset": 0, 00:43:42.156 "data_size": 7936 00:43:42.156 }, 00:43:42.156 { 00:43:42.156 "name": "BaseBdev2", 00:43:42.156 "uuid": "86eadb8d-1e88-52f5-a3a8-58503ef3f13e", 00:43:42.156 "is_configured": true, 00:43:42.156 "data_offset": 256, 00:43:42.156 "data_size": 7936 00:43:42.156 } 00:43:42.156 ] 00:43:42.156 }' 00:43:42.156 17:40:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:42.156 17:40:42 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:42.414 17:40:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:43:42.414 17:40:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:43:42.414 17:40:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:43:42.414 17:40:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:43:42.414 17:40:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:43:42.414 17:40:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:42.414 17:40:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:42.414 17:40:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:42.414 17:40:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:42.414 17:40:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:42.414 17:40:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:43:42.414 "name": "raid_bdev1", 00:43:42.414 "uuid": "45b5027f-c696-455d-9413-b0b59e1823fa", 00:43:42.414 "strip_size_kb": 0, 00:43:42.414 "state": "online", 00:43:42.414 "raid_level": "raid1", 00:43:42.414 "superblock": true, 00:43:42.414 "num_base_bdevs": 2, 00:43:42.414 "num_base_bdevs_discovered": 1, 00:43:42.414 "num_base_bdevs_operational": 1, 00:43:42.414 "base_bdevs_list": [ 00:43:42.414 { 00:43:42.414 "name": null, 00:43:42.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:42.414 "is_configured": false, 00:43:42.414 "data_offset": 0, 00:43:42.414 
"data_size": 7936 00:43:42.414 }, 00:43:42.414 { 00:43:42.414 "name": "BaseBdev2", 00:43:42.414 "uuid": "86eadb8d-1e88-52f5-a3a8-58503ef3f13e", 00:43:42.414 "is_configured": true, 00:43:42.414 "data_offset": 256, 00:43:42.414 "data_size": 7936 00:43:42.414 } 00:43:42.414 ] 00:43:42.414 }' 00:43:42.414 17:40:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:43:42.672 17:40:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:43:42.672 17:40:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:43:42.673 17:40:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:43:42.673 17:40:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:43:42.673 17:40:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:43:42.673 17:40:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:43:42.673 17:40:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:43:42.673 17:40:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:42.673 17:40:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:43:42.673 17:40:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:42.673 17:40:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:43:42.673 17:40:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:43:42.673 17:40:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:42.673 [2024-11-26 17:40:43.192742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:43:42.673 [2024-11-26 17:40:43.193000] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:43:42.673 [2024-11-26 17:40:43.193018] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:43:42.673 request: 00:43:42.673 { 00:43:42.673 "base_bdev": "BaseBdev1", 00:43:42.673 "raid_bdev": "raid_bdev1", 00:43:42.673 "method": "bdev_raid_add_base_bdev", 00:43:42.673 "req_id": 1 00:43:42.673 } 00:43:42.673 Got JSON-RPC error response 00:43:42.673 response: 00:43:42.673 { 00:43:42.673 "code": -22, 00:43:42.673 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:43:42.673 } 00:43:42.673 17:40:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:43:42.673 17:40:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:43:42.673 17:40:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:43:42.673 17:40:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:43:42.673 17:40:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:43:42.673 17:40:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:43:43.609 17:40:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:43:43.609 17:40:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:43.609 17:40:44 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:43.609 17:40:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:43.609 17:40:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:43.609 17:40:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:43:43.609 17:40:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:43.609 17:40:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:43.609 17:40:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:43.609 17:40:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:43.609 17:40:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:43.609 17:40:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:43.609 17:40:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:43.609 17:40:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:43.609 17:40:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:43.609 17:40:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:43.609 "name": "raid_bdev1", 00:43:43.609 "uuid": "45b5027f-c696-455d-9413-b0b59e1823fa", 00:43:43.609 "strip_size_kb": 0, 00:43:43.609 "state": "online", 00:43:43.609 "raid_level": "raid1", 00:43:43.609 "superblock": true, 00:43:43.609 "num_base_bdevs": 2, 00:43:43.609 "num_base_bdevs_discovered": 1, 00:43:43.609 "num_base_bdevs_operational": 1, 00:43:43.609 "base_bdevs_list": [ 
00:43:43.609 { 00:43:43.609 "name": null, 00:43:43.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:43.609 "is_configured": false, 00:43:43.609 "data_offset": 0, 00:43:43.609 "data_size": 7936 00:43:43.609 }, 00:43:43.609 { 00:43:43.609 "name": "BaseBdev2", 00:43:43.609 "uuid": "86eadb8d-1e88-52f5-a3a8-58503ef3f13e", 00:43:43.610 "is_configured": true, 00:43:43.610 "data_offset": 256, 00:43:43.610 "data_size": 7936 00:43:43.610 } 00:43:43.610 ] 00:43:43.610 }' 00:43:43.610 17:40:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:43.610 17:40:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:44.177 17:40:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:43:44.177 17:40:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:43:44.178 17:40:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:43:44.178 17:40:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:43:44.178 17:40:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:43:44.178 17:40:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:44.178 17:40:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:44.178 17:40:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:44.178 17:40:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:44.178 17:40:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:44.178 17:40:44 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:43:44.178 "name": "raid_bdev1", 00:43:44.178 "uuid": "45b5027f-c696-455d-9413-b0b59e1823fa", 00:43:44.178 "strip_size_kb": 0, 00:43:44.178 "state": "online", 00:43:44.178 "raid_level": "raid1", 00:43:44.178 "superblock": true, 00:43:44.178 "num_base_bdevs": 2, 00:43:44.178 "num_base_bdevs_discovered": 1, 00:43:44.178 "num_base_bdevs_operational": 1, 00:43:44.178 "base_bdevs_list": [ 00:43:44.178 { 00:43:44.178 "name": null, 00:43:44.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:44.178 "is_configured": false, 00:43:44.178 "data_offset": 0, 00:43:44.178 "data_size": 7936 00:43:44.178 }, 00:43:44.178 { 00:43:44.178 "name": "BaseBdev2", 00:43:44.178 "uuid": "86eadb8d-1e88-52f5-a3a8-58503ef3f13e", 00:43:44.178 "is_configured": true, 00:43:44.178 "data_offset": 256, 00:43:44.178 "data_size": 7936 00:43:44.178 } 00:43:44.178 ] 00:43:44.178 }' 00:43:44.178 17:40:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:43:44.178 17:40:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:43:44.178 17:40:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:43:44.178 17:40:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:43:44.178 17:40:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88118 00:43:44.178 17:40:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88118 ']' 00:43:44.178 17:40:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 88118 00:43:44.178 17:40:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:43:44.178 17:40:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:44.178 
17:40:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88118 00:43:44.178 17:40:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:44.178 17:40:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:44.178 killing process with pid 88118 00:43:44.178 17:40:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88118' 00:43:44.178 17:40:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 88118 00:43:44.178 Received shutdown signal, test time was about 60.000000 seconds 00:43:44.178 00:43:44.178 Latency(us) 00:43:44.178 [2024-11-26T17:40:44.873Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:44.178 [2024-11-26T17:40:44.873Z] =================================================================================================================== 00:43:44.178 [2024-11-26T17:40:44.873Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:43:44.178 [2024-11-26 17:40:44.853785] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:43:44.178 17:40:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 88118 00:43:44.178 [2024-11-26 17:40:44.853970] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:43:44.178 [2024-11-26 17:40:44.854043] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:43:44.178 [2024-11-26 17:40:44.854059] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:43:44.745 [2024-11-26 17:40:45.287631] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:43:46.121 17:40:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # 
return 0 00:43:46.121 00:43:46.121 real 0m20.874s 00:43:46.121 user 0m27.033s 00:43:46.121 sys 0m2.970s 00:43:46.121 17:40:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:46.121 17:40:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:43:46.121 ************************************ 00:43:46.121 END TEST raid_rebuild_test_sb_md_separate 00:43:46.121 ************************************ 00:43:46.380 17:40:46 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:43:46.380 17:40:46 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:43:46.380 17:40:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:43:46.380 17:40:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:46.380 17:40:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:43:46.380 ************************************ 00:43:46.380 START TEST raid_state_function_test_sb_md_interleaved 00:43:46.380 ************************************ 00:43:46.380 17:40:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:43:46.380 17:40:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:43:46.380 17:40:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:43:46.380 17:40:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:43:46.380 17:40:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:43:46.380 17:40:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:43:46.380 17:40:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:43:46.380 17:40:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:43:46.380 17:40:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:43:46.380 17:40:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:43:46.380 17:40:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:43:46.380 17:40:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:43:46.380 17:40:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:43:46.380 17:40:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:43:46.380 17:40:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:43:46.380 17:40:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:43:46.380 17:40:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:43:46.380 17:40:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:43:46.380 17:40:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:43:46.380 17:40:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:43:46.380 17:40:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:43:46.380 17:40:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:43:46.380 17:40:46 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:43:46.380 17:40:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88816 00:43:46.380 17:40:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:43:46.380 17:40:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88816' 00:43:46.380 Process raid pid: 88816 00:43:46.380 17:40:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88816 00:43:46.380 17:40:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88816 ']' 00:43:46.380 17:40:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:46.380 17:40:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:46.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:46.380 17:40:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:46.380 17:40:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:46.380 17:40:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:46.380 [2024-11-26 17:40:46.951928] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:43:46.380 [2024-11-26 17:40:46.952626] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:46.640 [2024-11-26 17:40:47.138033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:46.640 [2024-11-26 17:40:47.293524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:46.899 [2024-11-26 17:40:47.536930] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:43:46.899 [2024-11-26 17:40:47.536982] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:43:47.159 17:40:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:47.159 17:40:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:43:47.159 17:40:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:43:47.159 17:40:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:47.159 17:40:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:47.159 [2024-11-26 17:40:47.790674] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:43:47.159 [2024-11-26 17:40:47.790741] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:43:47.159 [2024-11-26 17:40:47.790751] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:43:47.159 [2024-11-26 17:40:47.790762] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:43:47.159 17:40:47 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:47.159 17:40:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:43:47.159 17:40:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:43:47.159 17:40:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:43:47.159 17:40:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:47.159 17:40:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:47.159 17:40:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:43:47.159 17:40:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:47.159 17:40:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:47.159 17:40:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:47.159 17:40:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:47.159 17:40:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:47.159 17:40:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:47.159 17:40:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:47.159 17:40:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:43:47.159 17:40:47 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:47.159 17:40:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:47.159 "name": "Existed_Raid", 00:43:47.159 "uuid": "eb6af865-0563-43b9-b42f-5cc2e461e661", 00:43:47.159 "strip_size_kb": 0, 00:43:47.159 "state": "configuring", 00:43:47.159 "raid_level": "raid1", 00:43:47.159 "superblock": true, 00:43:47.159 "num_base_bdevs": 2, 00:43:47.159 "num_base_bdevs_discovered": 0, 00:43:47.159 "num_base_bdevs_operational": 2, 00:43:47.159 "base_bdevs_list": [ 00:43:47.159 { 00:43:47.159 "name": "BaseBdev1", 00:43:47.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:47.159 "is_configured": false, 00:43:47.159 "data_offset": 0, 00:43:47.159 "data_size": 0 00:43:47.159 }, 00:43:47.159 { 00:43:47.159 "name": "BaseBdev2", 00:43:47.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:47.159 "is_configured": false, 00:43:47.159 "data_offset": 0, 00:43:47.159 "data_size": 0 00:43:47.159 } 00:43:47.159 ] 00:43:47.159 }' 00:43:47.159 17:40:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:47.160 17:40:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:47.755 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:43:47.755 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:47.755 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:47.755 [2024-11-26 17:40:48.297779] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:43:47.755 [2024-11-26 17:40:48.297855] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:43:47.755 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:47.755 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:43:47.755 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:47.755 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:47.755 [2024-11-26 17:40:48.305716] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:43:47.755 [2024-11-26 17:40:48.305766] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:43:47.755 [2024-11-26 17:40:48.305777] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:43:47.755 [2024-11-26 17:40:48.305791] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:43:47.755 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:47.755 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:43:47.755 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:47.755 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:47.755 [2024-11-26 17:40:48.357227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:43:47.755 BaseBdev1 00:43:47.755 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:47.755 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:43:47.755 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:43:47.755 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:43:47.755 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:43:47.755 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:43:47.755 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:43:47.755 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:43:47.755 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:47.755 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:47.755 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:47.755 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:43:47.755 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:47.755 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:47.755 [ 00:43:47.755 { 00:43:47.755 "name": "BaseBdev1", 00:43:47.755 "aliases": [ 00:43:47.755 "b561fdf7-751d-4569-bd5d-eed4fb7c0860" 00:43:47.755 ], 00:43:47.755 "product_name": "Malloc disk", 00:43:47.755 "block_size": 4128, 00:43:47.755 "num_blocks": 8192, 00:43:47.755 "uuid": "b561fdf7-751d-4569-bd5d-eed4fb7c0860", 00:43:47.755 "md_size": 32, 00:43:47.755 
"md_interleave": true, 00:43:47.755 "dif_type": 0, 00:43:47.755 "assigned_rate_limits": { 00:43:47.755 "rw_ios_per_sec": 0, 00:43:47.755 "rw_mbytes_per_sec": 0, 00:43:47.755 "r_mbytes_per_sec": 0, 00:43:47.755 "w_mbytes_per_sec": 0 00:43:47.755 }, 00:43:47.755 "claimed": true, 00:43:47.755 "claim_type": "exclusive_write", 00:43:47.755 "zoned": false, 00:43:47.755 "supported_io_types": { 00:43:47.755 "read": true, 00:43:47.755 "write": true, 00:43:47.755 "unmap": true, 00:43:47.755 "flush": true, 00:43:47.755 "reset": true, 00:43:47.755 "nvme_admin": false, 00:43:47.755 "nvme_io": false, 00:43:47.755 "nvme_io_md": false, 00:43:47.755 "write_zeroes": true, 00:43:47.755 "zcopy": true, 00:43:47.755 "get_zone_info": false, 00:43:47.755 "zone_management": false, 00:43:47.755 "zone_append": false, 00:43:47.755 "compare": false, 00:43:47.755 "compare_and_write": false, 00:43:47.755 "abort": true, 00:43:47.755 "seek_hole": false, 00:43:47.755 "seek_data": false, 00:43:47.755 "copy": true, 00:43:47.755 "nvme_iov_md": false 00:43:47.755 }, 00:43:47.755 "memory_domains": [ 00:43:47.755 { 00:43:47.755 "dma_device_id": "system", 00:43:47.755 "dma_device_type": 1 00:43:47.755 }, 00:43:47.755 { 00:43:47.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:43:47.755 "dma_device_type": 2 00:43:47.755 } 00:43:47.755 ], 00:43:47.755 "driver_specific": {} 00:43:47.755 } 00:43:47.755 ] 00:43:47.755 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:47.755 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:43:47.755 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:43:47.755 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:43:47.755 17:40:48 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:43:47.755 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:47.755 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:47.755 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:43:47.755 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:47.756 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:47.756 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:47.756 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:47.756 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:47.756 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:43:47.756 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:47.756 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:47.756 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:47.756 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:47.756 "name": "Existed_Raid", 00:43:47.756 "uuid": "98900102-18ba-4658-9c71-d3ffc461568d", 00:43:47.756 "strip_size_kb": 0, 00:43:47.756 "state": "configuring", 00:43:47.756 "raid_level": "raid1", 
00:43:47.756 "superblock": true, 00:43:47.756 "num_base_bdevs": 2, 00:43:47.756 "num_base_bdevs_discovered": 1, 00:43:47.756 "num_base_bdevs_operational": 2, 00:43:47.756 "base_bdevs_list": [ 00:43:47.756 { 00:43:47.756 "name": "BaseBdev1", 00:43:47.756 "uuid": "b561fdf7-751d-4569-bd5d-eed4fb7c0860", 00:43:47.756 "is_configured": true, 00:43:47.756 "data_offset": 256, 00:43:47.756 "data_size": 7936 00:43:47.756 }, 00:43:47.756 { 00:43:47.756 "name": "BaseBdev2", 00:43:47.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:47.756 "is_configured": false, 00:43:47.756 "data_offset": 0, 00:43:47.756 "data_size": 0 00:43:47.756 } 00:43:47.756 ] 00:43:47.756 }' 00:43:47.756 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:47.756 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:48.325 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:43:48.325 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:48.325 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:48.325 [2024-11-26 17:40:48.844676] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:43:48.325 [2024-11-26 17:40:48.844749] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:43:48.325 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:48.325 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:43:48.325 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:43:48.325 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:48.325 [2024-11-26 17:40:48.852719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:43:48.325 [2024-11-26 17:40:48.854832] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:43:48.325 [2024-11-26 17:40:48.854876] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:43:48.325 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:48.325 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:43:48.325 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:43:48.325 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:43:48.325 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:43:48.325 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:43:48.325 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:48.325 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:48.325 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:43:48.325 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:48.325 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:48.325 
17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:48.325 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:48.325 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:48.325 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:43:48.325 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:48.325 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:48.325 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:48.325 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:48.325 "name": "Existed_Raid", 00:43:48.325 "uuid": "2e3727e4-82ef-46b6-a6bc-6e63791754ca", 00:43:48.325 "strip_size_kb": 0, 00:43:48.325 "state": "configuring", 00:43:48.325 "raid_level": "raid1", 00:43:48.325 "superblock": true, 00:43:48.325 "num_base_bdevs": 2, 00:43:48.325 "num_base_bdevs_discovered": 1, 00:43:48.325 "num_base_bdevs_operational": 2, 00:43:48.325 "base_bdevs_list": [ 00:43:48.325 { 00:43:48.325 "name": "BaseBdev1", 00:43:48.325 "uuid": "b561fdf7-751d-4569-bd5d-eed4fb7c0860", 00:43:48.325 "is_configured": true, 00:43:48.325 "data_offset": 256, 00:43:48.325 "data_size": 7936 00:43:48.325 }, 00:43:48.325 { 00:43:48.325 "name": "BaseBdev2", 00:43:48.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:48.325 "is_configured": false, 00:43:48.325 "data_offset": 0, 00:43:48.325 "data_size": 0 00:43:48.325 } 00:43:48.325 ] 00:43:48.325 }' 00:43:48.325 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:43:48.325 17:40:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:48.894 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:43:48.894 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:48.894 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:48.894 [2024-11-26 17:40:49.370927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:43:48.894 [2024-11-26 17:40:49.371193] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:43:48.894 [2024-11-26 17:40:49.371208] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:43:48.894 [2024-11-26 17:40:49.371304] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:43:48.894 [2024-11-26 17:40:49.371403] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:43:48.894 [2024-11-26 17:40:49.371415] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:43:48.894 [2024-11-26 17:40:49.371487] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:48.894 BaseBdev2 00:43:48.894 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:48.894 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:43:48.894 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:43:48.894 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:43:48.894 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:43:48.894 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:43:48.894 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:43:48.894 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:43:48.894 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:48.894 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:48.894 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:48.894 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:43:48.894 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:48.894 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:48.894 [ 00:43:48.894 { 00:43:48.894 "name": "BaseBdev2", 00:43:48.894 "aliases": [ 00:43:48.894 "8d59aad7-9eb7-4d37-8965-718b0ccc3ef1" 00:43:48.894 ], 00:43:48.894 "product_name": "Malloc disk", 00:43:48.894 "block_size": 4128, 00:43:48.894 "num_blocks": 8192, 00:43:48.894 "uuid": "8d59aad7-9eb7-4d37-8965-718b0ccc3ef1", 00:43:48.894 "md_size": 32, 00:43:48.894 "md_interleave": true, 00:43:48.894 "dif_type": 0, 00:43:48.894 "assigned_rate_limits": { 00:43:48.894 "rw_ios_per_sec": 0, 00:43:48.894 "rw_mbytes_per_sec": 0, 00:43:48.894 "r_mbytes_per_sec": 0, 00:43:48.894 "w_mbytes_per_sec": 0 00:43:48.894 }, 00:43:48.894 "claimed": true, 00:43:48.894 "claim_type": "exclusive_write", 
00:43:48.894 "zoned": false, 00:43:48.894 "supported_io_types": { 00:43:48.894 "read": true, 00:43:48.894 "write": true, 00:43:48.894 "unmap": true, 00:43:48.894 "flush": true, 00:43:48.894 "reset": true, 00:43:48.894 "nvme_admin": false, 00:43:48.894 "nvme_io": false, 00:43:48.894 "nvme_io_md": false, 00:43:48.894 "write_zeroes": true, 00:43:48.894 "zcopy": true, 00:43:48.894 "get_zone_info": false, 00:43:48.894 "zone_management": false, 00:43:48.894 "zone_append": false, 00:43:48.894 "compare": false, 00:43:48.894 "compare_and_write": false, 00:43:48.894 "abort": true, 00:43:48.894 "seek_hole": false, 00:43:48.894 "seek_data": false, 00:43:48.894 "copy": true, 00:43:48.894 "nvme_iov_md": false 00:43:48.894 }, 00:43:48.894 "memory_domains": [ 00:43:48.894 { 00:43:48.894 "dma_device_id": "system", 00:43:48.894 "dma_device_type": 1 00:43:48.894 }, 00:43:48.894 { 00:43:48.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:43:48.894 "dma_device_type": 2 00:43:48.894 } 00:43:48.894 ], 00:43:48.894 "driver_specific": {} 00:43:48.894 } 00:43:48.894 ] 00:43:48.894 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:48.894 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:43:48.894 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:43:48.894 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:43:48.894 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:43:48.894 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:43:48.894 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:48.894 
17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:48.894 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:48.894 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:43:48.894 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:48.895 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:48.895 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:48.895 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:48.895 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:48.895 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:48.895 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:43:48.895 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:48.895 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:48.895 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:48.895 "name": "Existed_Raid", 00:43:48.895 "uuid": "2e3727e4-82ef-46b6-a6bc-6e63791754ca", 00:43:48.895 "strip_size_kb": 0, 00:43:48.895 "state": "online", 00:43:48.895 "raid_level": "raid1", 00:43:48.895 "superblock": true, 00:43:48.895 "num_base_bdevs": 2, 00:43:48.895 "num_base_bdevs_discovered": 2, 00:43:48.895 
"num_base_bdevs_operational": 2, 00:43:48.895 "base_bdevs_list": [ 00:43:48.895 { 00:43:48.895 "name": "BaseBdev1", 00:43:48.895 "uuid": "b561fdf7-751d-4569-bd5d-eed4fb7c0860", 00:43:48.895 "is_configured": true, 00:43:48.895 "data_offset": 256, 00:43:48.895 "data_size": 7936 00:43:48.895 }, 00:43:48.895 { 00:43:48.895 "name": "BaseBdev2", 00:43:48.895 "uuid": "8d59aad7-9eb7-4d37-8965-718b0ccc3ef1", 00:43:48.895 "is_configured": true, 00:43:48.895 "data_offset": 256, 00:43:48.895 "data_size": 7936 00:43:48.895 } 00:43:48.895 ] 00:43:48.895 }' 00:43:48.895 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:48.895 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:49.465 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:43:49.465 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:43:49.465 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:43:49.465 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:43:49.465 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:43:49.465 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:43:49.465 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:43:49.465 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:49.465 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:43:49.465 17:40:49 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:49.465 [2024-11-26 17:40:49.922402] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:43:49.465 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:49.465 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:43:49.465 "name": "Existed_Raid", 00:43:49.465 "aliases": [ 00:43:49.465 "2e3727e4-82ef-46b6-a6bc-6e63791754ca" 00:43:49.465 ], 00:43:49.465 "product_name": "Raid Volume", 00:43:49.465 "block_size": 4128, 00:43:49.465 "num_blocks": 7936, 00:43:49.465 "uuid": "2e3727e4-82ef-46b6-a6bc-6e63791754ca", 00:43:49.465 "md_size": 32, 00:43:49.465 "md_interleave": true, 00:43:49.465 "dif_type": 0, 00:43:49.465 "assigned_rate_limits": { 00:43:49.465 "rw_ios_per_sec": 0, 00:43:49.465 "rw_mbytes_per_sec": 0, 00:43:49.465 "r_mbytes_per_sec": 0, 00:43:49.465 "w_mbytes_per_sec": 0 00:43:49.465 }, 00:43:49.465 "claimed": false, 00:43:49.465 "zoned": false, 00:43:49.465 "supported_io_types": { 00:43:49.465 "read": true, 00:43:49.465 "write": true, 00:43:49.465 "unmap": false, 00:43:49.465 "flush": false, 00:43:49.465 "reset": true, 00:43:49.465 "nvme_admin": false, 00:43:49.465 "nvme_io": false, 00:43:49.465 "nvme_io_md": false, 00:43:49.465 "write_zeroes": true, 00:43:49.465 "zcopy": false, 00:43:49.465 "get_zone_info": false, 00:43:49.465 "zone_management": false, 00:43:49.465 "zone_append": false, 00:43:49.465 "compare": false, 00:43:49.465 "compare_and_write": false, 00:43:49.465 "abort": false, 00:43:49.465 "seek_hole": false, 00:43:49.465 "seek_data": false, 00:43:49.465 "copy": false, 00:43:49.465 "nvme_iov_md": false 00:43:49.465 }, 00:43:49.465 "memory_domains": [ 00:43:49.465 { 00:43:49.465 "dma_device_id": "system", 00:43:49.465 "dma_device_type": 1 00:43:49.465 }, 00:43:49.465 { 00:43:49.465 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:43:49.465 "dma_device_type": 2 00:43:49.465 }, 00:43:49.465 { 00:43:49.465 "dma_device_id": "system", 00:43:49.465 "dma_device_type": 1 00:43:49.465 }, 00:43:49.465 { 00:43:49.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:43:49.465 "dma_device_type": 2 00:43:49.465 } 00:43:49.465 ], 00:43:49.465 "driver_specific": { 00:43:49.465 "raid": { 00:43:49.465 "uuid": "2e3727e4-82ef-46b6-a6bc-6e63791754ca", 00:43:49.465 "strip_size_kb": 0, 00:43:49.465 "state": "online", 00:43:49.465 "raid_level": "raid1", 00:43:49.465 "superblock": true, 00:43:49.465 "num_base_bdevs": 2, 00:43:49.465 "num_base_bdevs_discovered": 2, 00:43:49.465 "num_base_bdevs_operational": 2, 00:43:49.465 "base_bdevs_list": [ 00:43:49.465 { 00:43:49.465 "name": "BaseBdev1", 00:43:49.465 "uuid": "b561fdf7-751d-4569-bd5d-eed4fb7c0860", 00:43:49.465 "is_configured": true, 00:43:49.465 "data_offset": 256, 00:43:49.465 "data_size": 7936 00:43:49.465 }, 00:43:49.465 { 00:43:49.465 "name": "BaseBdev2", 00:43:49.465 "uuid": "8d59aad7-9eb7-4d37-8965-718b0ccc3ef1", 00:43:49.465 "is_configured": true, 00:43:49.465 "data_offset": 256, 00:43:49.465 "data_size": 7936 00:43:49.465 } 00:43:49.465 ] 00:43:49.465 } 00:43:49.465 } 00:43:49.465 }' 00:43:49.465 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:43:49.465 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:43:49.465 BaseBdev2' 00:43:49.465 17:40:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:43:49.465 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:43:49.465 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:43:49.465 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:43:49.465 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:43:49.465 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:49.465 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:49.465 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:49.465 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:43:49.465 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:43:49.465 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:43:49.465 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:43:49.465 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:49.465 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:49.465 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:43:49.465 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:49.465 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:43:49.465 
17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:43:49.465 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:43:49.465 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:49.465 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:49.465 [2024-11-26 17:40:50.141706] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:43:49.725 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:49.725 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:43:49.725 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:43:49.725 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:43:49.725 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:43:49.725 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:43:49.725 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:43:49.725 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:43:49.725 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:49.725 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:49.725 17:40:50 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:49.725 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:43:49.725 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:49.725 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:49.726 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:49.726 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:49.726 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:49.726 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:43:49.726 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:49.726 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:49.726 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:49.726 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:49.726 "name": "Existed_Raid", 00:43:49.726 "uuid": "2e3727e4-82ef-46b6-a6bc-6e63791754ca", 00:43:49.726 "strip_size_kb": 0, 00:43:49.726 "state": "online", 00:43:49.726 "raid_level": "raid1", 00:43:49.726 "superblock": true, 00:43:49.726 "num_base_bdevs": 2, 00:43:49.726 "num_base_bdevs_discovered": 1, 00:43:49.726 "num_base_bdevs_operational": 1, 00:43:49.726 "base_bdevs_list": [ 00:43:49.726 { 00:43:49.726 "name": null, 00:43:49.726 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:43:49.726 "is_configured": false, 00:43:49.726 "data_offset": 0, 00:43:49.726 "data_size": 7936 00:43:49.726 }, 00:43:49.726 { 00:43:49.726 "name": "BaseBdev2", 00:43:49.726 "uuid": "8d59aad7-9eb7-4d37-8965-718b0ccc3ef1", 00:43:49.726 "is_configured": true, 00:43:49.726 "data_offset": 256, 00:43:49.726 "data_size": 7936 00:43:49.726 } 00:43:49.726 ] 00:43:49.726 }' 00:43:49.726 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:49.726 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:50.294 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:43:50.294 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:43:50.294 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:50.294 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:43:50.294 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:50.294 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:50.294 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:50.294 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:43:50.294 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:43:50.294 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:43:50.294 17:40:50 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:50.294 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:50.294 [2024-11-26 17:40:50.780707] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:43:50.294 [2024-11-26 17:40:50.780847] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:43:50.294 [2024-11-26 17:40:50.889999] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:43:50.294 [2024-11-26 17:40:50.890067] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:43:50.294 [2024-11-26 17:40:50.890083] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:43:50.294 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:50.294 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:43:50.294 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:43:50.294 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:50.294 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:50.294 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:43:50.294 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:50.294 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:50.294 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:43:50.294 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:43:50.294 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:43:50.294 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88816 00:43:50.294 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88816 ']' 00:43:50.294 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88816 00:43:50.294 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:43:50.294 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:50.294 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88816 00:43:50.553 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:50.553 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:50.554 killing process with pid 88816 00:43:50.554 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88816' 00:43:50.554 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88816 00:43:50.554 [2024-11-26 17:40:50.992870] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:43:50.554 17:40:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88816 00:43:50.554 [2024-11-26 17:40:51.011608] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:43:51.934 
17:40:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:43:51.934 00:43:51.934 real 0m5.455s 00:43:51.934 user 0m7.711s 00:43:51.934 sys 0m1.032s 00:43:51.934 17:40:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:51.934 17:40:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:51.934 ************************************ 00:43:51.934 END TEST raid_state_function_test_sb_md_interleaved 00:43:51.934 ************************************ 00:43:51.935 17:40:52 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:43:51.935 17:40:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:43:51.935 17:40:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:51.935 17:40:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:43:51.935 ************************************ 00:43:51.935 START TEST raid_superblock_test_md_interleaved 00:43:51.935 ************************************ 00:43:51.935 17:40:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:43:51.935 17:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:43:51.935 17:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:43:51.935 17:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:43:51.935 17:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:43:51.935 17:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:43:51.935 17:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:43:51.935 17:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:43:51.935 17:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:43:51.935 17:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:43:51.935 17:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:43:51.935 17:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:43:51.935 17:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:43:51.935 17:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:43:51.935 17:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:43:51.935 17:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:43:51.935 17:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89070 00:43:51.935 17:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:43:51.935 17:40:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89070 00:43:51.935 17:40:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89070 ']' 00:43:51.935 17:40:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:51.935 17:40:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:51.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:43:51.935 17:40:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:51.935 17:40:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:51.935 17:40:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:51.935 [2024-11-26 17:40:52.469947] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:43:51.935 [2024-11-26 17:40:52.470599] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89070 ] 00:43:52.194 [2024-11-26 17:40:52.649244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:52.194 [2024-11-26 17:40:52.832134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:52.454 [2024-11-26 17:40:53.105960] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:43:52.454 [2024-11-26 17:40:53.106049] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:43:52.713 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:52.713 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:43:52.713 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:43:52.713 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:43:52.713 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:43:52.713 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:43:52.713 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:43:52.713 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:43:52.713 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:43:52.713 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:43:52.713 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:43:52.713 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:52.713 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:52.973 malloc1 00:43:52.973 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:52.973 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:43:52.973 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:52.973 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:52.973 [2024-11-26 17:40:53.450493] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:43:52.973 [2024-11-26 17:40:53.450579] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:52.973 [2024-11-26 17:40:53.450607] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:43:52.973 [2024-11-26 17:40:53.450618] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:52.973 
[2024-11-26 17:40:53.452904] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:52.973 [2024-11-26 17:40:53.452940] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:43:52.973 pt1 00:43:52.973 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:52.973 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:43:52.973 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:43:52.973 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:43:52.973 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:43:52.973 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:43:52.973 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:43:52.973 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:43:52.973 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:43:52.973 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:43:52.973 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:52.973 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:52.973 malloc2 00:43:52.974 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:52.974 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:43:52.974 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:52.974 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:52.974 [2024-11-26 17:40:53.513770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:43:52.974 [2024-11-26 17:40:53.513843] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:52.974 [2024-11-26 17:40:53.513870] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:43:52.974 [2024-11-26 17:40:53.513880] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:52.974 [2024-11-26 17:40:53.516201] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:52.974 [2024-11-26 17:40:53.516296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:43:52.974 pt2 00:43:52.974 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:52.974 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:43:52.974 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:43:52.974 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:43:52.974 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:52.974 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:52.974 [2024-11-26 17:40:53.525815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:43:52.974 [2024-11-26 17:40:53.528127] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:43:52.974 [2024-11-26 17:40:53.528355] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:43:52.974 [2024-11-26 17:40:53.528370] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:43:52.974 [2024-11-26 17:40:53.528472] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:43:52.974 [2024-11-26 17:40:53.528575] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:43:52.974 [2024-11-26 17:40:53.528590] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:43:52.974 [2024-11-26 17:40:53.528677] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:52.974 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:52.974 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:43:52.974 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:52.974 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:52.974 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:52.974 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:52.974 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:43:52.974 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:52.974 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:52.974 
17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:52.974 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:52.974 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:52.974 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:52.974 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:52.974 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:52.974 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:52.974 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:52.974 "name": "raid_bdev1", 00:43:52.974 "uuid": "d51f9920-1381-430a-823f-ef1c7d200eb5", 00:43:52.974 "strip_size_kb": 0, 00:43:52.974 "state": "online", 00:43:52.974 "raid_level": "raid1", 00:43:52.974 "superblock": true, 00:43:52.974 "num_base_bdevs": 2, 00:43:52.974 "num_base_bdevs_discovered": 2, 00:43:52.974 "num_base_bdevs_operational": 2, 00:43:52.974 "base_bdevs_list": [ 00:43:52.974 { 00:43:52.974 "name": "pt1", 00:43:52.974 "uuid": "00000000-0000-0000-0000-000000000001", 00:43:52.974 "is_configured": true, 00:43:52.974 "data_offset": 256, 00:43:52.974 "data_size": 7936 00:43:52.974 }, 00:43:52.974 { 00:43:52.974 "name": "pt2", 00:43:52.974 "uuid": "00000000-0000-0000-0000-000000000002", 00:43:52.974 "is_configured": true, 00:43:52.974 "data_offset": 256, 00:43:52.974 "data_size": 7936 00:43:52.974 } 00:43:52.974 ] 00:43:52.974 }' 00:43:52.974 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:52.974 17:40:53 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:53.545 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:43:53.545 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:43:53.545 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:43:53.545 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:43:53.545 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:43:53.545 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:43:53.545 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:43:53.545 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.545 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:53.545 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:43:53.545 [2024-11-26 17:40:53.985627] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:43:53.545 17:40:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.545 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:43:53.545 "name": "raid_bdev1", 00:43:53.545 "aliases": [ 00:43:53.545 "d51f9920-1381-430a-823f-ef1c7d200eb5" 00:43:53.545 ], 00:43:53.545 "product_name": "Raid Volume", 00:43:53.545 "block_size": 4128, 00:43:53.545 "num_blocks": 7936, 00:43:53.545 "uuid": "d51f9920-1381-430a-823f-ef1c7d200eb5", 00:43:53.545 "md_size": 32, 
00:43:53.545 "md_interleave": true, 00:43:53.545 "dif_type": 0, 00:43:53.545 "assigned_rate_limits": { 00:43:53.545 "rw_ios_per_sec": 0, 00:43:53.545 "rw_mbytes_per_sec": 0, 00:43:53.545 "r_mbytes_per_sec": 0, 00:43:53.545 "w_mbytes_per_sec": 0 00:43:53.545 }, 00:43:53.545 "claimed": false, 00:43:53.545 "zoned": false, 00:43:53.545 "supported_io_types": { 00:43:53.545 "read": true, 00:43:53.545 "write": true, 00:43:53.545 "unmap": false, 00:43:53.545 "flush": false, 00:43:53.545 "reset": true, 00:43:53.545 "nvme_admin": false, 00:43:53.545 "nvme_io": false, 00:43:53.545 "nvme_io_md": false, 00:43:53.545 "write_zeroes": true, 00:43:53.545 "zcopy": false, 00:43:53.545 "get_zone_info": false, 00:43:53.545 "zone_management": false, 00:43:53.545 "zone_append": false, 00:43:53.545 "compare": false, 00:43:53.545 "compare_and_write": false, 00:43:53.545 "abort": false, 00:43:53.545 "seek_hole": false, 00:43:53.545 "seek_data": false, 00:43:53.545 "copy": false, 00:43:53.545 "nvme_iov_md": false 00:43:53.545 }, 00:43:53.545 "memory_domains": [ 00:43:53.545 { 00:43:53.545 "dma_device_id": "system", 00:43:53.545 "dma_device_type": 1 00:43:53.545 }, 00:43:53.545 { 00:43:53.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:43:53.545 "dma_device_type": 2 00:43:53.545 }, 00:43:53.545 { 00:43:53.545 "dma_device_id": "system", 00:43:53.545 "dma_device_type": 1 00:43:53.545 }, 00:43:53.545 { 00:43:53.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:43:53.545 "dma_device_type": 2 00:43:53.545 } 00:43:53.545 ], 00:43:53.545 "driver_specific": { 00:43:53.545 "raid": { 00:43:53.545 "uuid": "d51f9920-1381-430a-823f-ef1c7d200eb5", 00:43:53.545 "strip_size_kb": 0, 00:43:53.545 "state": "online", 00:43:53.545 "raid_level": "raid1", 00:43:53.545 "superblock": true, 00:43:53.545 "num_base_bdevs": 2, 00:43:53.545 "num_base_bdevs_discovered": 2, 00:43:53.545 "num_base_bdevs_operational": 2, 00:43:53.545 "base_bdevs_list": [ 00:43:53.545 { 00:43:53.546 "name": "pt1", 00:43:53.546 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:43:53.546 "is_configured": true, 00:43:53.546 "data_offset": 256, 00:43:53.546 "data_size": 7936 00:43:53.546 }, 00:43:53.546 { 00:43:53.546 "name": "pt2", 00:43:53.546 "uuid": "00000000-0000-0000-0000-000000000002", 00:43:53.546 "is_configured": true, 00:43:53.546 "data_offset": 256, 00:43:53.546 "data_size": 7936 00:43:53.546 } 00:43:53.546 ] 00:43:53.546 } 00:43:53.546 } 00:43:53.546 }' 00:43:53.546 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:43:53.546 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:43:53.546 pt2' 00:43:53.546 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:43:53.546 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:43:53.546 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:43:53.546 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:43:53.546 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:43:53.546 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.546 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:53.546 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.546 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:43:53.546 17:40:54 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:43:53.546 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:43:53.546 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:43:53.546 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:43:53.546 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.546 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:53.546 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.546 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:43:53.546 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:43:53.546 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:43:53.546 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:43:53.546 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.546 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:53.546 [2024-11-26 17:40:54.201258] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:43:53.546 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.895 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d51f9920-1381-430a-823f-ef1c7d200eb5 00:43:53.895 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z d51f9920-1381-430a-823f-ef1c7d200eb5 ']' 00:43:53.895 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:43:53.895 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.895 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:53.895 [2024-11-26 17:40:54.248829] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:43:53.895 [2024-11-26 17:40:54.248969] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:43:53.895 [2024-11-26 17:40:54.249122] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:43:53.895 [2024-11-26 17:40:54.249218] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:43:53.895 [2024-11-26 17:40:54.249268] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:43:53.895 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.895 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:43:53.895 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:53.895 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.895 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:53.895 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.895 17:40:54 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:43:53.895 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:43:53.895 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:43:53.895 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:43:53.895 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.895 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:53.895 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.895 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:43:53.895 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:43:53.895 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.895 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:53.895 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.895 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:43:53.895 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:43:53.896 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.896 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:53.896 17:40:54 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.896 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:43:53.896 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:43:53.896 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:43:53.896 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:43:53.896 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:43:53.896 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:53.896 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:43:53.896 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:53.896 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:43:53.896 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.896 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:53.896 [2024-11-26 17:40:54.392678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:43:53.896 [2024-11-26 17:40:54.394972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:43:53.896 [2024-11-26 17:40:54.395067] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:43:53.896 [2024-11-26 17:40:54.395136] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:43:53.896 [2024-11-26 17:40:54.395152] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:43:53.896 [2024-11-26 17:40:54.395164] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:43:53.896 request: 00:43:53.896 { 00:43:53.896 "name": "raid_bdev1", 00:43:53.896 "raid_level": "raid1", 00:43:53.896 "base_bdevs": [ 00:43:53.896 "malloc1", 00:43:53.896 "malloc2" 00:43:53.896 ], 00:43:53.896 "superblock": false, 00:43:53.896 "method": "bdev_raid_create", 00:43:53.896 "req_id": 1 00:43:53.896 } 00:43:53.896 Got JSON-RPC error response 00:43:53.896 response: 00:43:53.896 { 00:43:53.896 "code": -17, 00:43:53.896 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:43:53.896 } 00:43:53.896 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:43:53.896 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:43:53.896 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:43:53.896 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:43:53.896 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:43:53.896 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:53.896 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.896 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:53.896 17:40:54 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:43:53.896 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.896 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:43:53.896 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:43:53.896 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:43:53.896 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.896 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:53.896 [2024-11-26 17:40:54.460580] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:43:53.896 [2024-11-26 17:40:54.460712] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:53.896 [2024-11-26 17:40:54.460751] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:43:53.896 [2024-11-26 17:40:54.460788] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:53.896 [2024-11-26 17:40:54.463134] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:53.896 [2024-11-26 17:40:54.463219] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:43:53.896 [2024-11-26 17:40:54.463306] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:43:53.896 [2024-11-26 17:40:54.463408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:43:53.896 pt1 00:43:53.896 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.896 17:40:54 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:43:53.896 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:53.896 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:43:53.896 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:53.896 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:53.896 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:43:53.896 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:53.896 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:53.896 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:53.896 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:53.896 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:53.896 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:53.896 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:53.896 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:53.896 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:53.896 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:53.896 
"name": "raid_bdev1", 00:43:53.896 "uuid": "d51f9920-1381-430a-823f-ef1c7d200eb5", 00:43:53.896 "strip_size_kb": 0, 00:43:53.896 "state": "configuring", 00:43:53.896 "raid_level": "raid1", 00:43:53.896 "superblock": true, 00:43:53.896 "num_base_bdevs": 2, 00:43:53.896 "num_base_bdevs_discovered": 1, 00:43:53.896 "num_base_bdevs_operational": 2, 00:43:53.896 "base_bdevs_list": [ 00:43:53.896 { 00:43:53.896 "name": "pt1", 00:43:53.896 "uuid": "00000000-0000-0000-0000-000000000001", 00:43:53.896 "is_configured": true, 00:43:53.896 "data_offset": 256, 00:43:53.896 "data_size": 7936 00:43:53.896 }, 00:43:53.896 { 00:43:53.896 "name": null, 00:43:53.896 "uuid": "00000000-0000-0000-0000-000000000002", 00:43:53.896 "is_configured": false, 00:43:53.896 "data_offset": 256, 00:43:53.896 "data_size": 7936 00:43:53.896 } 00:43:53.896 ] 00:43:53.896 }' 00:43:53.896 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:53.896 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:54.489 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:43:54.489 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:43:54.489 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:43:54.489 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:43:54.489 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:54.489 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:54.489 [2024-11-26 17:40:54.931793] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:43:54.489 [2024-11-26 17:40:54.931920] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:54.490 [2024-11-26 17:40:54.931948] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:43:54.490 [2024-11-26 17:40:54.931960] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:54.490 [2024-11-26 17:40:54.932194] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:54.490 [2024-11-26 17:40:54.932214] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:43:54.490 [2024-11-26 17:40:54.932282] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:43:54.490 [2024-11-26 17:40:54.932311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:43:54.490 [2024-11-26 17:40:54.932408] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:43:54.490 [2024-11-26 17:40:54.932421] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:43:54.490 [2024-11-26 17:40:54.932502] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:43:54.490 [2024-11-26 17:40:54.932606] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:43:54.490 [2024-11-26 17:40:54.932614] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:43:54.490 [2024-11-26 17:40:54.932695] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:54.490 pt2 00:43:54.490 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:54.490 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:43:54.490 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:43:54.490 17:40:54 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:43:54.490 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:54.490 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:54.490 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:54.490 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:54.490 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:43:54.490 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:54.490 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:54.490 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:54.490 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:54.490 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:54.490 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:54.490 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:54.490 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:54.490 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:54.490 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:54.490 "name": 
"raid_bdev1", 00:43:54.490 "uuid": "d51f9920-1381-430a-823f-ef1c7d200eb5", 00:43:54.490 "strip_size_kb": 0, 00:43:54.490 "state": "online", 00:43:54.490 "raid_level": "raid1", 00:43:54.490 "superblock": true, 00:43:54.490 "num_base_bdevs": 2, 00:43:54.490 "num_base_bdevs_discovered": 2, 00:43:54.490 "num_base_bdevs_operational": 2, 00:43:54.490 "base_bdevs_list": [ 00:43:54.490 { 00:43:54.490 "name": "pt1", 00:43:54.490 "uuid": "00000000-0000-0000-0000-000000000001", 00:43:54.490 "is_configured": true, 00:43:54.490 "data_offset": 256, 00:43:54.490 "data_size": 7936 00:43:54.490 }, 00:43:54.490 { 00:43:54.490 "name": "pt2", 00:43:54.490 "uuid": "00000000-0000-0000-0000-000000000002", 00:43:54.490 "is_configured": true, 00:43:54.490 "data_offset": 256, 00:43:54.490 "data_size": 7936 00:43:54.490 } 00:43:54.490 ] 00:43:54.490 }' 00:43:54.490 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:54.490 17:40:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:54.751 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:43:54.751 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:43:54.751 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:43:54.751 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:43:54.751 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:43:54.751 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:43:54.751 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:43:54.751 17:40:55 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:43:54.751 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:54.751 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:54.751 [2024-11-26 17:40:55.395435] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:43:54.751 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:54.751 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:43:54.751 "name": "raid_bdev1", 00:43:54.751 "aliases": [ 00:43:54.751 "d51f9920-1381-430a-823f-ef1c7d200eb5" 00:43:54.751 ], 00:43:54.751 "product_name": "Raid Volume", 00:43:54.751 "block_size": 4128, 00:43:54.751 "num_blocks": 7936, 00:43:54.751 "uuid": "d51f9920-1381-430a-823f-ef1c7d200eb5", 00:43:54.751 "md_size": 32, 00:43:54.751 "md_interleave": true, 00:43:54.751 "dif_type": 0, 00:43:54.751 "assigned_rate_limits": { 00:43:54.751 "rw_ios_per_sec": 0, 00:43:54.751 "rw_mbytes_per_sec": 0, 00:43:54.751 "r_mbytes_per_sec": 0, 00:43:54.751 "w_mbytes_per_sec": 0 00:43:54.751 }, 00:43:54.751 "claimed": false, 00:43:54.751 "zoned": false, 00:43:54.751 "supported_io_types": { 00:43:54.751 "read": true, 00:43:54.751 "write": true, 00:43:54.751 "unmap": false, 00:43:54.751 "flush": false, 00:43:54.751 "reset": true, 00:43:54.751 "nvme_admin": false, 00:43:54.751 "nvme_io": false, 00:43:54.751 "nvme_io_md": false, 00:43:54.751 "write_zeroes": true, 00:43:54.751 "zcopy": false, 00:43:54.751 "get_zone_info": false, 00:43:54.751 "zone_management": false, 00:43:54.751 "zone_append": false, 00:43:54.751 "compare": false, 00:43:54.751 "compare_and_write": false, 00:43:54.751 "abort": false, 00:43:54.751 "seek_hole": false, 00:43:54.751 "seek_data": false, 00:43:54.751 "copy": false, 00:43:54.751 "nvme_iov_md": 
false 00:43:54.751 }, 00:43:54.751 "memory_domains": [ 00:43:54.751 { 00:43:54.751 "dma_device_id": "system", 00:43:54.751 "dma_device_type": 1 00:43:54.751 }, 00:43:54.751 { 00:43:54.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:43:54.751 "dma_device_type": 2 00:43:54.751 }, 00:43:54.751 { 00:43:54.751 "dma_device_id": "system", 00:43:54.751 "dma_device_type": 1 00:43:54.751 }, 00:43:54.751 { 00:43:54.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:43:54.751 "dma_device_type": 2 00:43:54.751 } 00:43:54.751 ], 00:43:54.751 "driver_specific": { 00:43:54.751 "raid": { 00:43:54.751 "uuid": "d51f9920-1381-430a-823f-ef1c7d200eb5", 00:43:54.751 "strip_size_kb": 0, 00:43:54.751 "state": "online", 00:43:54.751 "raid_level": "raid1", 00:43:54.751 "superblock": true, 00:43:54.751 "num_base_bdevs": 2, 00:43:54.751 "num_base_bdevs_discovered": 2, 00:43:54.751 "num_base_bdevs_operational": 2, 00:43:54.751 "base_bdevs_list": [ 00:43:54.751 { 00:43:54.751 "name": "pt1", 00:43:54.751 "uuid": "00000000-0000-0000-0000-000000000001", 00:43:54.751 "is_configured": true, 00:43:54.751 "data_offset": 256, 00:43:54.751 "data_size": 7936 00:43:54.751 }, 00:43:54.751 { 00:43:54.751 "name": "pt2", 00:43:54.751 "uuid": "00000000-0000-0000-0000-000000000002", 00:43:54.751 "is_configured": true, 00:43:54.751 "data_offset": 256, 00:43:54.751 "data_size": 7936 00:43:54.751 } 00:43:54.751 ] 00:43:54.751 } 00:43:54.751 } 00:43:54.751 }' 00:43:54.751 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:43:55.011 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:43:55.011 pt2' 00:43:55.011 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:43:55.011 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:43:55.011 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:43:55.011 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:43:55.011 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:43:55.011 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:55.011 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:55.011 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:55.011 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:43:55.012 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:43:55.012 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:43:55.012 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:43:55.012 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:43:55.012 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:55.012 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:55.012 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:55.012 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:43:55.012 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:43:55.012 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:43:55.012 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:43:55.012 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:55.012 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:55.012 [2024-11-26 17:40:55.631014] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:43:55.012 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:55.012 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' d51f9920-1381-430a-823f-ef1c7d200eb5 '!=' d51f9920-1381-430a-823f-ef1c7d200eb5 ']' 00:43:55.012 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:43:55.012 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:43:55.012 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:43:55.012 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:43:55.012 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:55.012 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:55.012 [2024-11-26 17:40:55.674740] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:43:55.012 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:43:55.012 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:43:55.012 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:55.012 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:55.012 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:55.012 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:55.012 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:43:55.012 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:55.012 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:55.012 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:55.012 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:55.012 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:55.012 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:55.012 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:55.012 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:55.272 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:55.272 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:43:55.272 "name": "raid_bdev1", 00:43:55.272 "uuid": "d51f9920-1381-430a-823f-ef1c7d200eb5", 00:43:55.272 "strip_size_kb": 0, 00:43:55.272 "state": "online", 00:43:55.272 "raid_level": "raid1", 00:43:55.272 "superblock": true, 00:43:55.272 "num_base_bdevs": 2, 00:43:55.272 "num_base_bdevs_discovered": 1, 00:43:55.272 "num_base_bdevs_operational": 1, 00:43:55.272 "base_bdevs_list": [ 00:43:55.272 { 00:43:55.272 "name": null, 00:43:55.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:55.272 "is_configured": false, 00:43:55.272 "data_offset": 0, 00:43:55.272 "data_size": 7936 00:43:55.272 }, 00:43:55.272 { 00:43:55.272 "name": "pt2", 00:43:55.272 "uuid": "00000000-0000-0000-0000-000000000002", 00:43:55.272 "is_configured": true, 00:43:55.272 "data_offset": 256, 00:43:55.272 "data_size": 7936 00:43:55.272 } 00:43:55.272 ] 00:43:55.272 }' 00:43:55.272 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:55.272 17:40:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:55.532 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:43:55.532 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:55.532 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:55.532 [2024-11-26 17:40:56.157825] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:43:55.532 [2024-11-26 17:40:56.157955] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:43:55.532 [2024-11-26 17:40:56.158096] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:43:55.532 [2024-11-26 17:40:56.158178] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:43:55.532 [2024-11-26 17:40:56.158223] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:43:55.532 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:55.532 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:43:55.532 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:55.532 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:55.532 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:55.532 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:55.532 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:43:55.532 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:43:55.532 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:43:55.532 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:43:55.532 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:43:55.532 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:55.532 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:55.532 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:55.532 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:43:55.532 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:43:55.532 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:43:55.533 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:43:55.533 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:43:55.533 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:43:55.533 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:55.533 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:55.533 [2024-11-26 17:40:56.221706] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:43:55.533 [2024-11-26 17:40:56.221800] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:55.533 [2024-11-26 17:40:56.221821] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:43:55.533 [2024-11-26 17:40:56.221833] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:55.533 [2024-11-26 17:40:56.224353] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:55.533 [2024-11-26 17:40:56.224404] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:43:55.533 [2024-11-26 17:40:56.224476] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:43:55.533 [2024-11-26 17:40:56.224558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:43:55.533 [2024-11-26 17:40:56.224644] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:43:55.533 [2024-11-26 17:40:56.224659] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:43:55.533 [2024-11-26 17:40:56.224770] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:43:55.533 [2024-11-26 17:40:56.224849] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:43:55.533 [2024-11-26 17:40:56.224858] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:43:55.533 [2024-11-26 17:40:56.224937] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:55.792 pt2 00:43:55.792 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:55.792 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:43:55.792 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:55.792 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:55.792 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:55.792 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:55.792 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:43:55.792 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:55.792 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:55.792 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:55.792 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:55.792 17:40:56 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:55.792 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:55.792 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:55.792 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:55.792 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:55.792 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:55.792 "name": "raid_bdev1", 00:43:55.792 "uuid": "d51f9920-1381-430a-823f-ef1c7d200eb5", 00:43:55.792 "strip_size_kb": 0, 00:43:55.792 "state": "online", 00:43:55.792 "raid_level": "raid1", 00:43:55.792 "superblock": true, 00:43:55.792 "num_base_bdevs": 2, 00:43:55.792 "num_base_bdevs_discovered": 1, 00:43:55.792 "num_base_bdevs_operational": 1, 00:43:55.792 "base_bdevs_list": [ 00:43:55.792 { 00:43:55.792 "name": null, 00:43:55.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:55.792 "is_configured": false, 00:43:55.792 "data_offset": 256, 00:43:55.792 "data_size": 7936 00:43:55.792 }, 00:43:55.792 { 00:43:55.792 "name": "pt2", 00:43:55.792 "uuid": "00000000-0000-0000-0000-000000000002", 00:43:55.792 "is_configured": true, 00:43:55.792 "data_offset": 256, 00:43:55.792 "data_size": 7936 00:43:55.792 } 00:43:55.792 ] 00:43:55.792 }' 00:43:55.792 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:55.792 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:56.050 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:43:56.050 17:40:56 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:56.050 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:56.050 [2024-11-26 17:40:56.696879] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:43:56.050 [2024-11-26 17:40:56.697031] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:43:56.050 [2024-11-26 17:40:56.697179] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:43:56.050 [2024-11-26 17:40:56.697272] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:43:56.050 [2024-11-26 17:40:56.697340] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:43:56.050 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:56.050 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:43:56.050 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:56.050 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:56.050 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:56.050 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:56.050 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:43:56.050 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:43:56.050 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:43:56.050 17:40:56 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:43:56.050 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:56.050 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:56.307 [2024-11-26 17:40:56.744831] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:43:56.307 [2024-11-26 17:40:56.744976] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:56.307 [2024-11-26 17:40:56.745032] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:43:56.307 [2024-11-26 17:40:56.745066] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:56.307 [2024-11-26 17:40:56.747629] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:56.307 [2024-11-26 17:40:56.747704] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:43:56.307 [2024-11-26 17:40:56.747803] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:43:56.307 [2024-11-26 17:40:56.747887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:43:56.307 [2024-11-26 17:40:56.748038] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:43:56.307 [2024-11-26 17:40:56.748087] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:43:56.307 [2024-11-26 17:40:56.748131] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:43:56.307 [2024-11-26 17:40:56.748239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:43:56.307 [2024-11-26 17:40:56.748356] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:43:56.307 [2024-11-26 17:40:56.748391] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:43:56.307 [2024-11-26 17:40:56.748493] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:43:56.307 [2024-11-26 17:40:56.748643] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:43:56.307 [2024-11-26 17:40:56.748684] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:43:56.307 [2024-11-26 17:40:56.748894] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:56.307 pt1 00:43:56.307 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:56.307 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:43:56.307 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:43:56.307 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:56.307 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:56.307 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:56.307 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:56.307 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:43:56.307 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:56.307 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:56.307 17:40:56 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:56.307 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:56.307 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:56.307 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:56.307 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:56.307 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:56.307 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:56.307 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:56.307 "name": "raid_bdev1", 00:43:56.307 "uuid": "d51f9920-1381-430a-823f-ef1c7d200eb5", 00:43:56.307 "strip_size_kb": 0, 00:43:56.307 "state": "online", 00:43:56.307 "raid_level": "raid1", 00:43:56.307 "superblock": true, 00:43:56.307 "num_base_bdevs": 2, 00:43:56.307 "num_base_bdevs_discovered": 1, 00:43:56.307 "num_base_bdevs_operational": 1, 00:43:56.307 "base_bdevs_list": [ 00:43:56.307 { 00:43:56.307 "name": null, 00:43:56.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:56.307 "is_configured": false, 00:43:56.307 "data_offset": 256, 00:43:56.307 "data_size": 7936 00:43:56.307 }, 00:43:56.307 { 00:43:56.307 "name": "pt2", 00:43:56.307 "uuid": "00000000-0000-0000-0000-000000000002", 00:43:56.307 "is_configured": true, 00:43:56.307 "data_offset": 256, 00:43:56.307 "data_size": 7936 00:43:56.307 } 00:43:56.307 ] 00:43:56.307 }' 00:43:56.307 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:56.307 17:40:56 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:43:56.564 17:40:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:43:56.564 17:40:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:43:56.564 17:40:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:56.564 17:40:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:56.564 17:40:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:56.824 17:40:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:43:56.824 17:40:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:43:56.824 17:40:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:43:56.824 17:40:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:56.824 17:40:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:56.824 [2024-11-26 17:40:57.292339] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:43:56.824 17:40:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:56.824 17:40:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' d51f9920-1381-430a-823f-ef1c7d200eb5 '!=' d51f9920-1381-430a-823f-ef1c7d200eb5 ']' 00:43:56.824 17:40:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89070 00:43:56.824 17:40:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89070 ']' 00:43:56.824 17:40:57 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89070 00:43:56.824 17:40:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:43:56.824 17:40:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:56.824 17:40:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89070 00:43:56.824 17:40:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:56.824 17:40:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:56.824 killing process with pid 89070 00:43:56.824 17:40:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89070' 00:43:56.824 17:40:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 89070 00:43:56.824 [2024-11-26 17:40:57.374420] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:43:56.824 [2024-11-26 17:40:57.374561] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:43:56.824 17:40:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 89070 00:43:56.824 [2024-11-26 17:40:57.374621] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:43:56.824 [2024-11-26 17:40:57.374640] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:43:57.083 [2024-11-26 17:40:57.618203] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:43:58.461 17:40:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:43:58.461 00:43:58.461 real 0m6.551s 00:43:58.461 user 0m9.716s 00:43:58.461 sys 0m1.285s 00:43:58.461 
17:40:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:58.461 17:40:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:58.461 ************************************ 00:43:58.461 END TEST raid_superblock_test_md_interleaved 00:43:58.461 ************************************ 00:43:58.461 17:40:58 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:43:58.461 17:40:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:43:58.461 17:40:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:58.461 17:40:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:43:58.461 ************************************ 00:43:58.461 START TEST raid_rebuild_test_sb_md_interleaved 00:43:58.461 ************************************ 00:43:58.461 17:40:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:43:58.461 17:40:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:43:58.461 17:40:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:43:58.461 17:40:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:43:58.461 17:40:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:43:58.461 17:40:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:43:58.461 17:40:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:43:58.461 17:40:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:43:58.461 17:40:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:43:58.461 17:40:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:43:58.461 17:40:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:43:58.461 17:40:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:43:58.461 17:40:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:43:58.461 17:40:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:43:58.461 17:40:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:43:58.461 17:40:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:43:58.461 17:40:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:43:58.461 17:40:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:43:58.461 17:40:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:43:58.461 17:40:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:43:58.461 17:40:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:43:58.461 17:40:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:43:58.461 17:40:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:43:58.461 17:40:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:43:58.461 17:40:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:43:58.461 17:40:59 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@597 -- # raid_pid=89400 00:43:58.461 17:40:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:43:58.461 17:40:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89400 00:43:58.461 17:40:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89400 ']' 00:43:58.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:58.461 17:40:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:58.461 17:40:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:58.461 17:40:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:58.461 17:40:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:58.461 17:40:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:58.461 [2024-11-26 17:40:59.124217] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:43:58.461 [2024-11-26 17:40:59.124497] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89400 ] 00:43:58.461 I/O size of 3145728 is greater than zero copy threshold (65536). 00:43:58.461 Zero copy mechanism will not be used. 
00:43:58.719 [2024-11-26 17:40:59.311867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:58.977 [2024-11-26 17:40:59.469755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:59.236 [2024-11-26 17:40:59.751042] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:43:59.236 [2024-11-26 17:40:59.751188] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:43:59.511 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:59.511 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:43:59.511 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:43:59.511 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:43:59.511 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:59.511 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:59.511 BaseBdev1_malloc 00:43:59.511 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:59.511 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:43:59.511 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:59.511 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:59.511 [2024-11-26 17:41:00.105981] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:43:59.511 [2024-11-26 17:41:00.106083] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:59.511 
[2024-11-26 17:41:00.106115] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:43:59.511 [2024-11-26 17:41:00.106142] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:59.511 [2024-11-26 17:41:00.108803] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:59.511 [2024-11-26 17:41:00.108852] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:43:59.511 BaseBdev1 00:43:59.511 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:59.511 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:43:59.511 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:43:59.511 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:59.511 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:59.511 BaseBdev2_malloc 00:43:59.511 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:59.511 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:43:59.511 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:59.511 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:59.511 [2024-11-26 17:41:00.172869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:43:59.511 [2024-11-26 17:41:00.172972] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:59.511 [2024-11-26 17:41:00.172998] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:43:59.511 [2024-11-26 17:41:00.173015] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:59.511 [2024-11-26 17:41:00.175601] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:59.511 [2024-11-26 17:41:00.175646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:43:59.511 BaseBdev2 00:43:59.511 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:59.511 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:43:59.511 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:59.511 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:59.851 spare_malloc 00:43:59.851 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:59.851 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:43:59.851 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:59.851 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:59.851 spare_delay 00:43:59.851 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:59.851 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:43:59.851 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:59.851 17:41:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:59.851 [2024-11-26 17:41:00.265694] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:43:59.851 [2024-11-26 17:41:00.265889] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:59.851 [2024-11-26 17:41:00.265926] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:43:59.851 [2024-11-26 17:41:00.265941] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:59.851 [2024-11-26 17:41:00.268583] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:59.851 [2024-11-26 17:41:00.268637] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:43:59.851 spare 00:43:59.851 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:59.851 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:43:59.851 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:59.851 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:59.851 [2024-11-26 17:41:00.277771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:43:59.851 [2024-11-26 17:41:00.280469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:43:59.851 [2024-11-26 17:41:00.280890] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:43:59.851 [2024-11-26 17:41:00.280919] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:43:59.851 [2024-11-26 17:41:00.281050] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:43:59.851 [2024-11-26 17:41:00.281150] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:43:59.851 [2024-11-26 17:41:00.281162] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:43:59.851 [2024-11-26 17:41:00.281285] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:59.851 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:59.851 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:43:59.851 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:43:59.851 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:43:59.851 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:43:59.851 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:43:59.851 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:43:59.851 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:43:59.851 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:43:59.851 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:43:59.852 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:43:59.852 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:43:59.852 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:43:59.852 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:59.852 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:43:59.852 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:59.852 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:43:59.852 "name": "raid_bdev1", 00:43:59.852 "uuid": "bb5382d5-44d4-48c4-9e37-58d8fcfde8dd", 00:43:59.852 "strip_size_kb": 0, 00:43:59.852 "state": "online", 00:43:59.852 "raid_level": "raid1", 00:43:59.852 "superblock": true, 00:43:59.852 "num_base_bdevs": 2, 00:43:59.852 "num_base_bdevs_discovered": 2, 00:43:59.852 "num_base_bdevs_operational": 2, 00:43:59.852 "base_bdevs_list": [ 00:43:59.852 { 00:43:59.852 "name": "BaseBdev1", 00:43:59.852 "uuid": "c35ff4cd-4e1c-5883-aa01-6319937c97b8", 00:43:59.852 "is_configured": true, 00:43:59.852 "data_offset": 256, 00:43:59.852 "data_size": 7936 00:43:59.852 }, 00:43:59.852 { 00:43:59.852 "name": "BaseBdev2", 00:43:59.852 "uuid": "b0af9227-83ea-52b3-a1e7-2cf5bb58d87b", 00:43:59.852 "is_configured": true, 00:43:59.852 "data_offset": 256, 00:43:59.852 "data_size": 7936 00:43:59.852 } 00:43:59.852 ] 00:43:59.852 }' 00:43:59.852 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:43:59.852 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:00.111 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:44:00.111 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:44:00.111 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:00.111 
17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:00.111 [2024-11-26 17:41:00.741691] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:44:00.111 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:00.111 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:44:00.111 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:00.111 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:44:00.111 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:00.111 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:00.111 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:00.371 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:44:00.371 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:44:00.371 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:44:00.371 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:44:00.371 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:00.371 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:00.371 [2024-11-26 17:41:00.828839] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:44:00.371 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:00.371 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:44:00.371 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:44:00.371 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:44:00.371 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:44:00.371 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:44:00.371 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:44:00.372 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:00.372 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:00.372 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:00.372 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:00.372 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:00.372 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:00.372 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:00.372 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:00.372 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:00.372 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:00.372 "name": "raid_bdev1", 00:44:00.372 "uuid": "bb5382d5-44d4-48c4-9e37-58d8fcfde8dd", 00:44:00.372 "strip_size_kb": 0, 00:44:00.372 "state": "online", 00:44:00.372 "raid_level": "raid1", 00:44:00.372 "superblock": true, 00:44:00.372 "num_base_bdevs": 2, 00:44:00.372 "num_base_bdevs_discovered": 1, 00:44:00.372 "num_base_bdevs_operational": 1, 00:44:00.372 "base_bdevs_list": [ 00:44:00.372 { 00:44:00.372 "name": null, 00:44:00.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:00.372 "is_configured": false, 00:44:00.372 "data_offset": 0, 00:44:00.372 "data_size": 7936 00:44:00.372 }, 00:44:00.372 { 00:44:00.372 "name": "BaseBdev2", 00:44:00.372 "uuid": "b0af9227-83ea-52b3-a1e7-2cf5bb58d87b", 00:44:00.372 "is_configured": true, 00:44:00.372 "data_offset": 256, 00:44:00.372 "data_size": 7936 00:44:00.372 } 00:44:00.372 ] 00:44:00.372 }' 00:44:00.372 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:00.372 17:41:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:00.632 17:41:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:44:00.632 17:41:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:00.632 17:41:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:00.632 [2024-11-26 17:41:01.280275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:44:00.632 [2024-11-26 17:41:01.302777] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:44:00.632 17:41:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:00.632 17:41:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:44:00.632 
[2024-11-26 17:41:01.305590] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:44:02.011 17:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:44:02.011 17:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:44:02.011 17:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:44:02.011 17:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:44:02.011 17:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:44:02.011 17:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:02.011 17:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:02.011 17:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:02.011 17:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:02.011 17:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:02.011 17:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:44:02.011 "name": "raid_bdev1", 00:44:02.011 "uuid": "bb5382d5-44d4-48c4-9e37-58d8fcfde8dd", 00:44:02.011 "strip_size_kb": 0, 00:44:02.012 "state": "online", 00:44:02.012 "raid_level": "raid1", 00:44:02.012 "superblock": true, 00:44:02.012 "num_base_bdevs": 2, 00:44:02.012 "num_base_bdevs_discovered": 2, 00:44:02.012 "num_base_bdevs_operational": 2, 00:44:02.012 "process": { 00:44:02.012 "type": "rebuild", 00:44:02.012 "target": "spare", 00:44:02.012 "progress": { 00:44:02.012 
"blocks": 2560, 00:44:02.012 "percent": 32 00:44:02.012 } 00:44:02.012 }, 00:44:02.012 "base_bdevs_list": [ 00:44:02.012 { 00:44:02.012 "name": "spare", 00:44:02.012 "uuid": "d4ba34af-6657-5522-b80d-7f6aab63d10e", 00:44:02.012 "is_configured": true, 00:44:02.012 "data_offset": 256, 00:44:02.012 "data_size": 7936 00:44:02.012 }, 00:44:02.012 { 00:44:02.012 "name": "BaseBdev2", 00:44:02.012 "uuid": "b0af9227-83ea-52b3-a1e7-2cf5bb58d87b", 00:44:02.012 "is_configured": true, 00:44:02.012 "data_offset": 256, 00:44:02.012 "data_size": 7936 00:44:02.012 } 00:44:02.012 ] 00:44:02.012 }' 00:44:02.012 17:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:44:02.012 17:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:44:02.012 17:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:44:02.012 17:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:44:02.012 17:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:44:02.012 17:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:02.012 17:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:02.012 [2024-11-26 17:41:02.444962] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:44:02.012 [2024-11-26 17:41:02.516845] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:44:02.012 [2024-11-26 17:41:02.517071] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:44:02.012 [2024-11-26 17:41:02.517090] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:44:02.012 [2024-11-26 17:41:02.517106] 
bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:44:02.012 17:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:02.012 17:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:44:02.012 17:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:44:02.012 17:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:44:02.012 17:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:44:02.012 17:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:44:02.012 17:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:44:02.012 17:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:02.012 17:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:02.012 17:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:02.012 17:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:02.012 17:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:02.012 17:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:02.012 17:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:02.012 17:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:44:02.012 17:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:02.012 17:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:02.012 "name": "raid_bdev1", 00:44:02.012 "uuid": "bb5382d5-44d4-48c4-9e37-58d8fcfde8dd", 00:44:02.012 "strip_size_kb": 0, 00:44:02.012 "state": "online", 00:44:02.012 "raid_level": "raid1", 00:44:02.012 "superblock": true, 00:44:02.012 "num_base_bdevs": 2, 00:44:02.012 "num_base_bdevs_discovered": 1, 00:44:02.012 "num_base_bdevs_operational": 1, 00:44:02.012 "base_bdevs_list": [ 00:44:02.012 { 00:44:02.012 "name": null, 00:44:02.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:02.012 "is_configured": false, 00:44:02.012 "data_offset": 0, 00:44:02.012 "data_size": 7936 00:44:02.012 }, 00:44:02.012 { 00:44:02.012 "name": "BaseBdev2", 00:44:02.012 "uuid": "b0af9227-83ea-52b3-a1e7-2cf5bb58d87b", 00:44:02.012 "is_configured": true, 00:44:02.012 "data_offset": 256, 00:44:02.012 "data_size": 7936 00:44:02.012 } 00:44:02.012 ] 00:44:02.012 }' 00:44:02.012 17:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:02.012 17:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:02.582 17:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:44:02.582 17:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:44:02.582 17:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:44:02.582 17:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:44:02.582 17:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:44:02.582 17:41:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:02.582 17:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:02.582 17:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:02.582 17:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:02.582 17:41:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:02.582 17:41:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:44:02.582 "name": "raid_bdev1", 00:44:02.582 "uuid": "bb5382d5-44d4-48c4-9e37-58d8fcfde8dd", 00:44:02.582 "strip_size_kb": 0, 00:44:02.582 "state": "online", 00:44:02.582 "raid_level": "raid1", 00:44:02.582 "superblock": true, 00:44:02.583 "num_base_bdevs": 2, 00:44:02.583 "num_base_bdevs_discovered": 1, 00:44:02.583 "num_base_bdevs_operational": 1, 00:44:02.583 "base_bdevs_list": [ 00:44:02.583 { 00:44:02.583 "name": null, 00:44:02.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:02.583 "is_configured": false, 00:44:02.583 "data_offset": 0, 00:44:02.583 "data_size": 7936 00:44:02.583 }, 00:44:02.583 { 00:44:02.583 "name": "BaseBdev2", 00:44:02.583 "uuid": "b0af9227-83ea-52b3-a1e7-2cf5bb58d87b", 00:44:02.583 "is_configured": true, 00:44:02.583 "data_offset": 256, 00:44:02.583 "data_size": 7936 00:44:02.583 } 00:44:02.583 ] 00:44:02.583 }' 00:44:02.583 17:41:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:44:02.583 17:41:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:44:02.583 17:41:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:44:02.583 17:41:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:44:02.583 17:41:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:44:02.583 17:41:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:02.583 17:41:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:02.583 [2024-11-26 17:41:03.090675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:44:02.583 [2024-11-26 17:41:03.109548] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:44:02.583 17:41:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:02.583 17:41:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:44:02.583 [2024-11-26 17:41:03.111822] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:44:03.520 17:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:44:03.520 17:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:44:03.520 17:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:44:03.520 17:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:44:03.520 17:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:44:03.520 17:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:03.520 17:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:03.520 
17:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:03.520 17:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:03.520 17:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:03.520 17:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:44:03.520 "name": "raid_bdev1", 00:44:03.520 "uuid": "bb5382d5-44d4-48c4-9e37-58d8fcfde8dd", 00:44:03.520 "strip_size_kb": 0, 00:44:03.520 "state": "online", 00:44:03.520 "raid_level": "raid1", 00:44:03.520 "superblock": true, 00:44:03.520 "num_base_bdevs": 2, 00:44:03.520 "num_base_bdevs_discovered": 2, 00:44:03.520 "num_base_bdevs_operational": 2, 00:44:03.520 "process": { 00:44:03.520 "type": "rebuild", 00:44:03.520 "target": "spare", 00:44:03.520 "progress": { 00:44:03.520 "blocks": 2560, 00:44:03.521 "percent": 32 00:44:03.521 } 00:44:03.521 }, 00:44:03.521 "base_bdevs_list": [ 00:44:03.521 { 00:44:03.521 "name": "spare", 00:44:03.521 "uuid": "d4ba34af-6657-5522-b80d-7f6aab63d10e", 00:44:03.521 "is_configured": true, 00:44:03.521 "data_offset": 256, 00:44:03.521 "data_size": 7936 00:44:03.521 }, 00:44:03.521 { 00:44:03.521 "name": "BaseBdev2", 00:44:03.521 "uuid": "b0af9227-83ea-52b3-a1e7-2cf5bb58d87b", 00:44:03.521 "is_configured": true, 00:44:03.521 "data_offset": 256, 00:44:03.521 "data_size": 7936 00:44:03.521 } 00:44:03.521 ] 00:44:03.521 }' 00:44:03.521 17:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:44:03.780 17:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:44:03.780 17:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:44:03.780 17:41:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:44:03.780 17:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:44:03.780 17:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:44:03.780 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:44:03.780 17:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:44:03.780 17:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:44:03.780 17:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:44:03.780 17:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=759 00:44:03.780 17:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:44:03.780 17:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:44:03.780 17:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:44:03.780 17:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:44:03.780 17:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:44:03.780 17:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:44:03.780 17:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:03.780 17:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:03.780 17:41:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:03.780 17:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:03.780 17:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:03.780 17:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:44:03.780 "name": "raid_bdev1", 00:44:03.780 "uuid": "bb5382d5-44d4-48c4-9e37-58d8fcfde8dd", 00:44:03.780 "strip_size_kb": 0, 00:44:03.780 "state": "online", 00:44:03.780 "raid_level": "raid1", 00:44:03.780 "superblock": true, 00:44:03.780 "num_base_bdevs": 2, 00:44:03.780 "num_base_bdevs_discovered": 2, 00:44:03.780 "num_base_bdevs_operational": 2, 00:44:03.780 "process": { 00:44:03.780 "type": "rebuild", 00:44:03.780 "target": "spare", 00:44:03.780 "progress": { 00:44:03.780 "blocks": 2816, 00:44:03.780 "percent": 35 00:44:03.780 } 00:44:03.780 }, 00:44:03.780 "base_bdevs_list": [ 00:44:03.780 { 00:44:03.780 "name": "spare", 00:44:03.780 "uuid": "d4ba34af-6657-5522-b80d-7f6aab63d10e", 00:44:03.780 "is_configured": true, 00:44:03.780 "data_offset": 256, 00:44:03.780 "data_size": 7936 00:44:03.780 }, 00:44:03.780 { 00:44:03.780 "name": "BaseBdev2", 00:44:03.780 "uuid": "b0af9227-83ea-52b3-a1e7-2cf5bb58d87b", 00:44:03.780 "is_configured": true, 00:44:03.780 "data_offset": 256, 00:44:03.780 "data_size": 7936 00:44:03.780 } 00:44:03.780 ] 00:44:03.780 }' 00:44:03.780 17:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:44:03.780 17:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:44:03.780 17:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:44:03.780 17:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:44:03.780 17:41:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:44:05.160 17:41:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:44:05.160 17:41:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:44:05.160 17:41:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:44:05.160 17:41:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:44:05.160 17:41:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:44:05.160 17:41:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:44:05.160 17:41:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:05.160 17:41:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:05.160 17:41:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:05.160 17:41:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:05.160 17:41:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:05.160 17:41:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:44:05.160 "name": "raid_bdev1", 00:44:05.160 "uuid": "bb5382d5-44d4-48c4-9e37-58d8fcfde8dd", 00:44:05.160 "strip_size_kb": 0, 00:44:05.160 "state": "online", 00:44:05.160 "raid_level": "raid1", 00:44:05.160 "superblock": true, 00:44:05.160 "num_base_bdevs": 2, 00:44:05.160 "num_base_bdevs_discovered": 2, 00:44:05.160 
"num_base_bdevs_operational": 2, 00:44:05.160 "process": { 00:44:05.160 "type": "rebuild", 00:44:05.160 "target": "spare", 00:44:05.160 "progress": { 00:44:05.160 "blocks": 5888, 00:44:05.160 "percent": 74 00:44:05.160 } 00:44:05.160 }, 00:44:05.160 "base_bdevs_list": [ 00:44:05.160 { 00:44:05.160 "name": "spare", 00:44:05.160 "uuid": "d4ba34af-6657-5522-b80d-7f6aab63d10e", 00:44:05.160 "is_configured": true, 00:44:05.160 "data_offset": 256, 00:44:05.160 "data_size": 7936 00:44:05.160 }, 00:44:05.160 { 00:44:05.160 "name": "BaseBdev2", 00:44:05.160 "uuid": "b0af9227-83ea-52b3-a1e7-2cf5bb58d87b", 00:44:05.160 "is_configured": true, 00:44:05.160 "data_offset": 256, 00:44:05.160 "data_size": 7936 00:44:05.160 } 00:44:05.160 ] 00:44:05.160 }' 00:44:05.160 17:41:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:44:05.160 17:41:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:44:05.160 17:41:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:44:05.160 17:41:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:44:05.160 17:41:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:44:05.728 [2024-11-26 17:41:06.238599] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:44:05.728 [2024-11-26 17:41:06.238718] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:44:05.728 [2024-11-26 17:41:06.238897] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:44:05.987 17:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:44:05.987 17:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:44:05.987 17:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:44:05.987 17:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:44:05.987 17:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:44:05.987 17:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:44:05.987 17:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:05.987 17:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:05.987 17:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:05.987 17:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:05.987 17:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:05.987 17:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:44:05.987 "name": "raid_bdev1", 00:44:05.987 "uuid": "bb5382d5-44d4-48c4-9e37-58d8fcfde8dd", 00:44:05.987 "strip_size_kb": 0, 00:44:05.987 "state": "online", 00:44:05.987 "raid_level": "raid1", 00:44:05.987 "superblock": true, 00:44:05.987 "num_base_bdevs": 2, 00:44:05.987 "num_base_bdevs_discovered": 2, 00:44:05.987 "num_base_bdevs_operational": 2, 00:44:05.987 "base_bdevs_list": [ 00:44:05.987 { 00:44:05.987 "name": "spare", 00:44:05.987 "uuid": "d4ba34af-6657-5522-b80d-7f6aab63d10e", 00:44:05.987 "is_configured": true, 00:44:05.987 "data_offset": 256, 00:44:05.987 "data_size": 7936 00:44:05.987 }, 00:44:05.987 { 00:44:05.987 "name": "BaseBdev2", 00:44:05.987 "uuid": "b0af9227-83ea-52b3-a1e7-2cf5bb58d87b", 00:44:05.987 
"is_configured": true, 00:44:05.987 "data_offset": 256, 00:44:05.987 "data_size": 7936 00:44:05.987 } 00:44:05.987 ] 00:44:05.987 }' 00:44:05.987 17:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:44:06.247 17:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:44:06.247 17:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:44:06.247 17:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:44:06.247 17:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:44:06.247 17:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:44:06.247 17:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:44:06.247 17:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:44:06.247 17:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:44:06.247 17:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:44:06.247 17:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:06.247 17:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:06.247 17:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:06.247 17:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:06.247 17:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:44:06.247 17:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:44:06.247 "name": "raid_bdev1", 00:44:06.247 "uuid": "bb5382d5-44d4-48c4-9e37-58d8fcfde8dd", 00:44:06.247 "strip_size_kb": 0, 00:44:06.247 "state": "online", 00:44:06.247 "raid_level": "raid1", 00:44:06.247 "superblock": true, 00:44:06.247 "num_base_bdevs": 2, 00:44:06.247 "num_base_bdevs_discovered": 2, 00:44:06.247 "num_base_bdevs_operational": 2, 00:44:06.247 "base_bdevs_list": [ 00:44:06.247 { 00:44:06.247 "name": "spare", 00:44:06.247 "uuid": "d4ba34af-6657-5522-b80d-7f6aab63d10e", 00:44:06.247 "is_configured": true, 00:44:06.247 "data_offset": 256, 00:44:06.247 "data_size": 7936 00:44:06.247 }, 00:44:06.247 { 00:44:06.247 "name": "BaseBdev2", 00:44:06.247 "uuid": "b0af9227-83ea-52b3-a1e7-2cf5bb58d87b", 00:44:06.247 "is_configured": true, 00:44:06.247 "data_offset": 256, 00:44:06.248 "data_size": 7936 00:44:06.248 } 00:44:06.248 ] 00:44:06.248 }' 00:44:06.248 17:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:44:06.248 17:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:44:06.248 17:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:44:06.248 17:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:44:06.248 17:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:44:06.248 17:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:44:06.248 17:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:44:06.248 17:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid1 00:44:06.248 17:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:44:06.248 17:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:44:06.248 17:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:06.248 17:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:06.248 17:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:06.248 17:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:06.248 17:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:06.248 17:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:06.248 17:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:06.248 17:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:06.248 17:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:06.248 17:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:06.248 "name": "raid_bdev1", 00:44:06.248 "uuid": "bb5382d5-44d4-48c4-9e37-58d8fcfde8dd", 00:44:06.248 "strip_size_kb": 0, 00:44:06.248 "state": "online", 00:44:06.248 "raid_level": "raid1", 00:44:06.248 "superblock": true, 00:44:06.248 "num_base_bdevs": 2, 00:44:06.248 "num_base_bdevs_discovered": 2, 00:44:06.248 "num_base_bdevs_operational": 2, 00:44:06.248 "base_bdevs_list": [ 00:44:06.248 { 00:44:06.248 "name": "spare", 00:44:06.248 "uuid": "d4ba34af-6657-5522-b80d-7f6aab63d10e", 00:44:06.248 
"is_configured": true, 00:44:06.248 "data_offset": 256, 00:44:06.248 "data_size": 7936 00:44:06.248 }, 00:44:06.248 { 00:44:06.248 "name": "BaseBdev2", 00:44:06.248 "uuid": "b0af9227-83ea-52b3-a1e7-2cf5bb58d87b", 00:44:06.248 "is_configured": true, 00:44:06.248 "data_offset": 256, 00:44:06.248 "data_size": 7936 00:44:06.248 } 00:44:06.248 ] 00:44:06.248 }' 00:44:06.248 17:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:06.248 17:41:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:06.818 17:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:44:06.818 spare 00:44:06.818 17:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:06.818 17:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:06.818 [2024-11-26 17:41:07.304925] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:44:06.818 [2024-11-26 17:41:07.305060] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:44:06.818 [2024-11-26 17:41:07.305249] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:44:06.818 [2024-11-26 17:41:07.305336] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:44:06.818 [2024-11-26 17:41:07.305351] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:44:06.818 17:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:06.818 17:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:44:06.818 17:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:44:06.818 17:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:06.818 17:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:06.818 17:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:06.818 17:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:44:06.818 17:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:44:06.818 17:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:44:06.818 17:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:44:06.818 17:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:06.818 17:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:06.818 17:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:06.818 17:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:44:06.818 17:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:06.818 17:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:06.818 [2024-11-26 17:41:07.376734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:44:06.818 [2024-11-26 17:41:07.376875] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:44:06.818 [2024-11-26 17:41:07.376928] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:44:06.818 [2024-11-26 17:41:07.376961] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:44:06.818 [2024-11-26 17:41:07.379472] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:44:06.818 [2024-11-26 17:41:07.379564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:44:06.818 [2024-11-26 17:41:07.379662] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:44:06.818 [2024-11-26 17:41:07.379767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:44:06.818 [2024-11-26 17:41:07.379939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:44:06.818 17:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:06.818 17:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:44:06.818 17:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:06.818 17:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:06.818 [2024-11-26 17:41:07.479902] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:44:06.818 [2024-11-26 17:41:07.480054] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:44:06.818 [2024-11-26 17:41:07.480236] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:44:06.818 [2024-11-26 17:41:07.480408] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:44:06.818 [2024-11-26 17:41:07.480421] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:44:06.818 [2024-11-26 17:41:07.480580] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:44:06.818 17:41:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:06.818 17:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:44:06.818 17:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:44:06.818 17:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:44:06.818 17:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:44:06.818 17:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:44:06.818 17:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:44:06.818 17:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:06.818 17:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:06.818 17:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:06.818 17:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:06.818 17:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:06.818 17:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:06.818 17:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:06.818 17:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:06.818 17:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:07.078 17:41:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:07.078 "name": "raid_bdev1", 00:44:07.078 "uuid": "bb5382d5-44d4-48c4-9e37-58d8fcfde8dd", 00:44:07.078 "strip_size_kb": 0, 00:44:07.078 "state": "online", 00:44:07.078 "raid_level": "raid1", 00:44:07.078 "superblock": true, 00:44:07.078 "num_base_bdevs": 2, 00:44:07.078 "num_base_bdevs_discovered": 2, 00:44:07.078 "num_base_bdevs_operational": 2, 00:44:07.078 "base_bdevs_list": [ 00:44:07.078 { 00:44:07.078 "name": "spare", 00:44:07.078 "uuid": "d4ba34af-6657-5522-b80d-7f6aab63d10e", 00:44:07.078 "is_configured": true, 00:44:07.078 "data_offset": 256, 00:44:07.078 "data_size": 7936 00:44:07.078 }, 00:44:07.078 { 00:44:07.078 "name": "BaseBdev2", 00:44:07.078 "uuid": "b0af9227-83ea-52b3-a1e7-2cf5bb58d87b", 00:44:07.078 "is_configured": true, 00:44:07.078 "data_offset": 256, 00:44:07.078 "data_size": 7936 00:44:07.078 } 00:44:07.078 ] 00:44:07.078 }' 00:44:07.078 17:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:07.078 17:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:07.346 17:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:44:07.346 17:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:44:07.346 17:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:44:07.346 17:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:44:07.346 17:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:44:07.346 17:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:07.346 17:41:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:07.346 17:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:07.346 17:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:07.346 17:41:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:07.346 17:41:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:44:07.346 "name": "raid_bdev1", 00:44:07.346 "uuid": "bb5382d5-44d4-48c4-9e37-58d8fcfde8dd", 00:44:07.346 "strip_size_kb": 0, 00:44:07.346 "state": "online", 00:44:07.346 "raid_level": "raid1", 00:44:07.346 "superblock": true, 00:44:07.346 "num_base_bdevs": 2, 00:44:07.346 "num_base_bdevs_discovered": 2, 00:44:07.346 "num_base_bdevs_operational": 2, 00:44:07.346 "base_bdevs_list": [ 00:44:07.346 { 00:44:07.346 "name": "spare", 00:44:07.346 "uuid": "d4ba34af-6657-5522-b80d-7f6aab63d10e", 00:44:07.346 "is_configured": true, 00:44:07.346 "data_offset": 256, 00:44:07.346 "data_size": 7936 00:44:07.346 }, 00:44:07.346 { 00:44:07.346 "name": "BaseBdev2", 00:44:07.346 "uuid": "b0af9227-83ea-52b3-a1e7-2cf5bb58d87b", 00:44:07.346 "is_configured": true, 00:44:07.346 "data_offset": 256, 00:44:07.346 "data_size": 7936 00:44:07.346 } 00:44:07.346 ] 00:44:07.346 }' 00:44:07.346 17:41:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:44:07.616 17:41:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:44:07.616 17:41:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:44:07.616 17:41:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:44:07.616 17:41:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:07.616 17:41:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:07.616 17:41:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:07.617 17:41:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:44:07.617 17:41:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:07.617 17:41:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:44:07.617 17:41:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:44:07.617 17:41:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:07.617 17:41:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:07.617 [2024-11-26 17:41:08.143747] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:44:07.617 17:41:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:07.617 17:41:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:44:07.617 17:41:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:44:07.617 17:41:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:44:07.617 17:41:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:44:07.617 17:41:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:44:07.617 17:41:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:44:07.617 17:41:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:07.617 17:41:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:07.617 17:41:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:07.617 17:41:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:07.617 17:41:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:07.617 17:41:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:07.617 17:41:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:07.617 17:41:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:07.617 17:41:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:07.617 17:41:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:07.617 "name": "raid_bdev1", 00:44:07.617 "uuid": "bb5382d5-44d4-48c4-9e37-58d8fcfde8dd", 00:44:07.617 "strip_size_kb": 0, 00:44:07.617 "state": "online", 00:44:07.617 "raid_level": "raid1", 00:44:07.617 "superblock": true, 00:44:07.617 "num_base_bdevs": 2, 00:44:07.617 "num_base_bdevs_discovered": 1, 00:44:07.617 "num_base_bdevs_operational": 1, 00:44:07.617 "base_bdevs_list": [ 00:44:07.617 { 00:44:07.617 "name": null, 00:44:07.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:07.617 "is_configured": false, 00:44:07.617 "data_offset": 0, 00:44:07.617 "data_size": 7936 00:44:07.617 }, 00:44:07.617 { 00:44:07.617 "name": "BaseBdev2", 00:44:07.617 
"uuid": "b0af9227-83ea-52b3-a1e7-2cf5bb58d87b", 00:44:07.617 "is_configured": true, 00:44:07.617 "data_offset": 256, 00:44:07.617 "data_size": 7936 00:44:07.617 } 00:44:07.617 ] 00:44:07.617 }' 00:44:07.617 17:41:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:07.617 17:41:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:08.187 17:41:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:44:08.187 17:41:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:08.187 17:41:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:08.187 [2024-11-26 17:41:08.610978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:44:08.187 [2024-11-26 17:41:08.611338] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:44:08.187 [2024-11-26 17:41:08.611426] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
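Editor's note: the `bdev_raid.sh: line 666: [: =: unary operator expected` failure logged earlier in this trace is the classic bash pitfall of expanding an unquoted empty variable inside a `[` test, so that `'[' $var = false ']'` collapses to `'[' = false ']'` and `[` sees a missing operand. The sketch below reproduces the failure mode and the quoting fix; the variable names (`flag`, `result`) are illustrative only and not taken from the SPDK test suite.

```shell
# Reproduce the "[: =: unary operator expected" class of bug.
flag=""                        # empty, like an unset option in the traced script

# Buggy form (commented out): with $flag unquoted and empty, the test
# expands to `[ = false ]` and fails with "unary operator expected":
#   if [ $flag = false ]; then ...

# Fixed form: quoting keeps the (empty) operand in place, so `[` always
# sees three arguments and the comparison is simply false:
if [ "$flag" = false ]; then
    result="was-false"
else
    result="not-false"
fi
echo "$result"
```

An equivalent fix in bash-only scripts is to use `[[ $flag = false ]]`, since `[[` does not word-split its operands.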
00:44:08.187 [2024-11-26 17:41:08.611520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:44:08.187 [2024-11-26 17:41:08.630708] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:44:08.187 17:41:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:08.187 17:41:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:44:08.187 [2024-11-26 17:41:08.633033] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:44:09.126 17:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:44:09.126 17:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:44:09.126 17:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:44:09.126 17:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:44:09.126 17:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:44:09.126 17:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:09.126 17:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:09.126 17:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:09.126 17:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:09.126 17:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:09.126 17:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:44:09.126 "name": "raid_bdev1", 00:44:09.126 "uuid": "bb5382d5-44d4-48c4-9e37-58d8fcfde8dd", 00:44:09.126 "strip_size_kb": 0, 00:44:09.126 "state": "online", 00:44:09.126 "raid_level": "raid1", 00:44:09.126 "superblock": true, 00:44:09.126 "num_base_bdevs": 2, 00:44:09.126 "num_base_bdevs_discovered": 2, 00:44:09.126 "num_base_bdevs_operational": 2, 00:44:09.126 "process": { 00:44:09.126 "type": "rebuild", 00:44:09.126 "target": "spare", 00:44:09.126 "progress": { 00:44:09.126 "blocks": 2560, 00:44:09.126 "percent": 32 00:44:09.126 } 00:44:09.126 }, 00:44:09.126 "base_bdevs_list": [ 00:44:09.126 { 00:44:09.126 "name": "spare", 00:44:09.126 "uuid": "d4ba34af-6657-5522-b80d-7f6aab63d10e", 00:44:09.126 "is_configured": true, 00:44:09.126 "data_offset": 256, 00:44:09.126 "data_size": 7936 00:44:09.126 }, 00:44:09.126 { 00:44:09.126 "name": "BaseBdev2", 00:44:09.126 "uuid": "b0af9227-83ea-52b3-a1e7-2cf5bb58d87b", 00:44:09.126 "is_configured": true, 00:44:09.126 "data_offset": 256, 00:44:09.126 "data_size": 7936 00:44:09.126 } 00:44:09.126 ] 00:44:09.126 }' 00:44:09.126 17:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:44:09.126 17:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:44:09.126 17:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:44:09.126 17:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:44:09.126 17:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:44:09.126 17:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:09.126 17:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:09.126 [2024-11-26 17:41:09.792804] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:44:09.387 [2024-11-26 17:41:09.842474] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:44:09.387 [2024-11-26 17:41:09.842578] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:44:09.387 [2024-11-26 17:41:09.842595] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:44:09.387 [2024-11-26 17:41:09.842607] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:44:09.387 17:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:09.387 17:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:44:09.387 17:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:44:09.387 17:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:44:09.387 17:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:44:09.387 17:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:44:09.387 17:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:44:09.387 17:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:09.387 17:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:09.387 17:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:09.387 17:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:09.387 17:41:09 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:09.387 17:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:09.387 17:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:09.387 17:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:09.387 17:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:09.387 17:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:09.387 "name": "raid_bdev1", 00:44:09.387 "uuid": "bb5382d5-44d4-48c4-9e37-58d8fcfde8dd", 00:44:09.387 "strip_size_kb": 0, 00:44:09.387 "state": "online", 00:44:09.387 "raid_level": "raid1", 00:44:09.387 "superblock": true, 00:44:09.387 "num_base_bdevs": 2, 00:44:09.387 "num_base_bdevs_discovered": 1, 00:44:09.387 "num_base_bdevs_operational": 1, 00:44:09.387 "base_bdevs_list": [ 00:44:09.387 { 00:44:09.387 "name": null, 00:44:09.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:09.387 "is_configured": false, 00:44:09.387 "data_offset": 0, 00:44:09.387 "data_size": 7936 00:44:09.387 }, 00:44:09.387 { 00:44:09.387 "name": "BaseBdev2", 00:44:09.387 "uuid": "b0af9227-83ea-52b3-a1e7-2cf5bb58d87b", 00:44:09.387 "is_configured": true, 00:44:09.387 "data_offset": 256, 00:44:09.387 "data_size": 7936 00:44:09.387 } 00:44:09.387 ] 00:44:09.387 }' 00:44:09.387 17:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:09.387 17:41:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:09.647 17:41:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:44:09.647 17:41:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:09.647 17:41:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:09.647 [2024-11-26 17:41:10.334691] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:44:09.647 [2024-11-26 17:41:10.334874] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:44:09.647 [2024-11-26 17:41:10.334930] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:44:09.647 [2024-11-26 17:41:10.334970] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:44:09.647 [2024-11-26 17:41:10.335263] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:44:09.647 [2024-11-26 17:41:10.335315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:44:09.647 [2024-11-26 17:41:10.335410] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:44:09.647 [2024-11-26 17:41:10.335469] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:44:09.647 [2024-11-26 17:41:10.335544] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
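The examine messages just above — "raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)" followed by "Re-adding bdev spare to raid bdev raid_bdev1" — reflect a stale-superblock check: a base bdev whose on-disk superblock lags the assembled array missed writes while detached, so it is re-added and rebuilt instead of being trusted as current. A minimal sketch of that decision logic (a hypothetical helper for illustration, not SPDK source code):

```python
# Illustrative sketch only: mirrors the sequence-number comparison the log
# shows in raid_bdev_examine_sb, using a hypothetical helper function.

def examine_decision(base_sb_seq: int, raid_sb_seq: int) -> str:
    """Decide how to treat a re-appearing base bdev based on its
    superblock sequence number versus the assembled raid bdev's."""
    if base_sb_seq < raid_sb_seq:
        # The base bdev is stale: it missed writes while absent,
        # so it must be re-added and rebuilt (the case in the log: 4 < 5).
        return "re-add and rebuild"
    if base_sb_seq == raid_sb_seq:
        # Superblocks agree: the base bdev can be configured as-is.
        return "configure as up-to-date"
    # A newer superblock than the running array indicates a conflict.
    return "reject (newer than array)"

print(examine_decision(4, 5))
```

After the re-add, the log shows the expected follow-on: the bdev is claimed, a rebuild process starts on `raid_bdev1`, and the next `bdev_raid_get_bdevs` dump reports `"process": { "type": "rebuild", "target": "spare", ... }`.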
00:44:09.647 [2024-11-26 17:41:10.335625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:44:09.906 [2024-11-26 17:41:10.355293] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:44:09.906 spare 00:44:09.906 17:41:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:09.906 17:41:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:44:09.906 [2024-11-26 17:41:10.357564] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:44:10.843 17:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:44:10.843 17:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:44:10.843 17:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:44:10.843 17:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:44:10.843 17:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:44:10.843 17:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:10.843 17:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:10.843 17:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:10.843 17:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:10.843 17:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:10.843 17:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:44:10.843 "name": "raid_bdev1", 00:44:10.843 "uuid": "bb5382d5-44d4-48c4-9e37-58d8fcfde8dd", 00:44:10.843 "strip_size_kb": 0, 00:44:10.843 "state": "online", 00:44:10.843 "raid_level": "raid1", 00:44:10.843 "superblock": true, 00:44:10.843 "num_base_bdevs": 2, 00:44:10.843 "num_base_bdevs_discovered": 2, 00:44:10.843 "num_base_bdevs_operational": 2, 00:44:10.843 "process": { 00:44:10.843 "type": "rebuild", 00:44:10.843 "target": "spare", 00:44:10.843 "progress": { 00:44:10.843 "blocks": 2560, 00:44:10.843 "percent": 32 00:44:10.843 } 00:44:10.843 }, 00:44:10.843 "base_bdevs_list": [ 00:44:10.843 { 00:44:10.843 "name": "spare", 00:44:10.843 "uuid": "d4ba34af-6657-5522-b80d-7f6aab63d10e", 00:44:10.843 "is_configured": true, 00:44:10.843 "data_offset": 256, 00:44:10.843 "data_size": 7936 00:44:10.843 }, 00:44:10.843 { 00:44:10.843 "name": "BaseBdev2", 00:44:10.843 "uuid": "b0af9227-83ea-52b3-a1e7-2cf5bb58d87b", 00:44:10.843 "is_configured": true, 00:44:10.843 "data_offset": 256, 00:44:10.843 "data_size": 7936 00:44:10.843 } 00:44:10.843 ] 00:44:10.843 }' 00:44:10.843 17:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:44:10.843 17:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:44:10.843 17:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:44:10.843 17:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:44:10.843 17:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:44:10.843 17:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:10.843 17:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:10.843 [2024-11-26 
17:41:11.498392] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:44:11.102 [2024-11-26 17:41:11.567150] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:44:11.102 [2024-11-26 17:41:11.567211] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:44:11.102 [2024-11-26 17:41:11.567230] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:44:11.102 [2024-11-26 17:41:11.567238] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:44:11.102 17:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:11.102 17:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:44:11.102 17:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:44:11.102 17:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:44:11.102 17:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:44:11.102 17:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:44:11.102 17:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:44:11.102 17:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:11.102 17:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:11.102 17:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:11.102 17:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:11.102 17:41:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:11.102 17:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:11.102 17:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:11.102 17:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:11.102 17:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:11.102 17:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:11.102 "name": "raid_bdev1", 00:44:11.102 "uuid": "bb5382d5-44d4-48c4-9e37-58d8fcfde8dd", 00:44:11.102 "strip_size_kb": 0, 00:44:11.102 "state": "online", 00:44:11.102 "raid_level": "raid1", 00:44:11.102 "superblock": true, 00:44:11.102 "num_base_bdevs": 2, 00:44:11.102 "num_base_bdevs_discovered": 1, 00:44:11.102 "num_base_bdevs_operational": 1, 00:44:11.102 "base_bdevs_list": [ 00:44:11.102 { 00:44:11.102 "name": null, 00:44:11.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:11.102 "is_configured": false, 00:44:11.102 "data_offset": 0, 00:44:11.102 "data_size": 7936 00:44:11.102 }, 00:44:11.102 { 00:44:11.102 "name": "BaseBdev2", 00:44:11.102 "uuid": "b0af9227-83ea-52b3-a1e7-2cf5bb58d87b", 00:44:11.102 "is_configured": true, 00:44:11.102 "data_offset": 256, 00:44:11.102 "data_size": 7936 00:44:11.102 } 00:44:11.102 ] 00:44:11.102 }' 00:44:11.102 17:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:11.102 17:41:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:11.360 17:41:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:44:11.360 17:41:12 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:44:11.360 17:41:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:44:11.360 17:41:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:44:11.360 17:41:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:44:11.360 17:41:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:11.729 17:41:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:11.729 17:41:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:11.729 17:41:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:11.729 17:41:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:11.729 17:41:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:44:11.729 "name": "raid_bdev1", 00:44:11.729 "uuid": "bb5382d5-44d4-48c4-9e37-58d8fcfde8dd", 00:44:11.729 "strip_size_kb": 0, 00:44:11.729 "state": "online", 00:44:11.729 "raid_level": "raid1", 00:44:11.729 "superblock": true, 00:44:11.729 "num_base_bdevs": 2, 00:44:11.729 "num_base_bdevs_discovered": 1, 00:44:11.729 "num_base_bdevs_operational": 1, 00:44:11.729 "base_bdevs_list": [ 00:44:11.729 { 00:44:11.729 "name": null, 00:44:11.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:11.729 "is_configured": false, 00:44:11.729 "data_offset": 0, 00:44:11.729 "data_size": 7936 00:44:11.729 }, 00:44:11.729 { 00:44:11.729 "name": "BaseBdev2", 00:44:11.729 "uuid": "b0af9227-83ea-52b3-a1e7-2cf5bb58d87b", 00:44:11.729 "is_configured": true, 00:44:11.729 "data_offset": 256, 
00:44:11.729 "data_size": 7936 00:44:11.729 } 00:44:11.729 ] 00:44:11.729 }' 00:44:11.730 17:41:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:44:11.730 17:41:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:44:11.730 17:41:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:44:11.730 17:41:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:44:11.730 17:41:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:44:11.730 17:41:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:11.730 17:41:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:11.730 17:41:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:11.730 17:41:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:44:11.730 17:41:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:11.730 17:41:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:11.730 [2024-11-26 17:41:12.198209] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:44:11.730 [2024-11-26 17:41:12.198331] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:44:11.730 [2024-11-26 17:41:12.198364] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:44:11.730 [2024-11-26 17:41:12.198374] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:44:11.730 [2024-11-26 17:41:12.198645] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:44:11.730 [2024-11-26 17:41:12.198662] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:44:11.730 [2024-11-26 17:41:12.198726] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:44:11.730 [2024-11-26 17:41:12.198742] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:44:11.730 [2024-11-26 17:41:12.198762] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:44:11.730 [2024-11-26 17:41:12.198775] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:44:11.730 BaseBdev1 00:44:11.730 17:41:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:11.730 17:41:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:44:12.668 17:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:44:12.668 17:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:44:12.668 17:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:44:12.668 17:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:44:12.668 17:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:44:12.668 17:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:44:12.668 17:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:12.668 17:41:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:12.668 17:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:12.668 17:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:12.668 17:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:12.668 17:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:12.668 17:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:12.668 17:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:12.668 17:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:12.668 17:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:12.668 "name": "raid_bdev1", 00:44:12.668 "uuid": "bb5382d5-44d4-48c4-9e37-58d8fcfde8dd", 00:44:12.668 "strip_size_kb": 0, 00:44:12.668 "state": "online", 00:44:12.668 "raid_level": "raid1", 00:44:12.668 "superblock": true, 00:44:12.668 "num_base_bdevs": 2, 00:44:12.669 "num_base_bdevs_discovered": 1, 00:44:12.669 "num_base_bdevs_operational": 1, 00:44:12.669 "base_bdevs_list": [ 00:44:12.669 { 00:44:12.669 "name": null, 00:44:12.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:12.669 "is_configured": false, 00:44:12.669 "data_offset": 0, 00:44:12.669 "data_size": 7936 00:44:12.669 }, 00:44:12.669 { 00:44:12.669 "name": "BaseBdev2", 00:44:12.669 "uuid": "b0af9227-83ea-52b3-a1e7-2cf5bb58d87b", 00:44:12.669 "is_configured": true, 00:44:12.669 "data_offset": 256, 00:44:12.669 "data_size": 7936 00:44:12.669 } 00:44:12.669 ] 00:44:12.669 }' 00:44:12.669 17:41:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:12.669 17:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:13.238 17:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:44:13.238 17:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:44:13.238 17:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:44:13.238 17:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:44:13.238 17:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:44:13.238 17:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:13.238 17:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:13.238 17:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:13.238 17:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:13.238 17:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:13.238 17:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:44:13.238 "name": "raid_bdev1", 00:44:13.238 "uuid": "bb5382d5-44d4-48c4-9e37-58d8fcfde8dd", 00:44:13.238 "strip_size_kb": 0, 00:44:13.238 "state": "online", 00:44:13.238 "raid_level": "raid1", 00:44:13.238 "superblock": true, 00:44:13.238 "num_base_bdevs": 2, 00:44:13.238 "num_base_bdevs_discovered": 1, 00:44:13.238 "num_base_bdevs_operational": 1, 00:44:13.238 "base_bdevs_list": [ 00:44:13.238 { 00:44:13.238 "name": 
null, 00:44:13.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:13.238 "is_configured": false, 00:44:13.238 "data_offset": 0, 00:44:13.238 "data_size": 7936 00:44:13.238 }, 00:44:13.238 { 00:44:13.238 "name": "BaseBdev2", 00:44:13.238 "uuid": "b0af9227-83ea-52b3-a1e7-2cf5bb58d87b", 00:44:13.238 "is_configured": true, 00:44:13.238 "data_offset": 256, 00:44:13.238 "data_size": 7936 00:44:13.238 } 00:44:13.238 ] 00:44:13.238 }' 00:44:13.238 17:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:44:13.238 17:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:44:13.238 17:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:44:13.238 17:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:44:13.238 17:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:44:13.238 17:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:44:13.238 17:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:44:13.238 17:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:44:13.238 17:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:13.238 17:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:44:13.238 17:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:13.238 17:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:44:13.238 17:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:13.238 17:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:13.238 [2024-11-26 17:41:13.799843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:44:13.238 [2024-11-26 17:41:13.800095] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:44:13.238 [2024-11-26 17:41:13.800157] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:44:13.238 request: 00:44:13.238 { 00:44:13.238 "base_bdev": "BaseBdev1", 00:44:13.238 "raid_bdev": "raid_bdev1", 00:44:13.238 "method": "bdev_raid_add_base_bdev", 00:44:13.238 "req_id": 1 00:44:13.238 } 00:44:13.238 Got JSON-RPC error response 00:44:13.238 response: 00:44:13.238 { 00:44:13.238 "code": -22, 00:44:13.238 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:44:13.238 } 00:44:13.238 17:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:44:13.238 17:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:44:13.238 17:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:13.238 17:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:13.238 17:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:13.238 17:41:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:44:14.177 17:41:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:44:14.177 17:41:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:44:14.177 17:41:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:44:14.177 17:41:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:44:14.177 17:41:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:44:14.177 17:41:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:44:14.177 17:41:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:44:14.177 17:41:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:44:14.177 17:41:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:44:14.177 17:41:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:44:14.177 17:41:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:14.177 17:41:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:14.177 17:41:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:14.177 17:41:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:14.177 17:41:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:14.177 17:41:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:44:14.177 "name": "raid_bdev1", 00:44:14.177 "uuid": "bb5382d5-44d4-48c4-9e37-58d8fcfde8dd", 00:44:14.177 "strip_size_kb": 0, 
00:44:14.177 "state": "online", 00:44:14.177 "raid_level": "raid1", 00:44:14.177 "superblock": true, 00:44:14.177 "num_base_bdevs": 2, 00:44:14.177 "num_base_bdevs_discovered": 1, 00:44:14.177 "num_base_bdevs_operational": 1, 00:44:14.177 "base_bdevs_list": [ 00:44:14.177 { 00:44:14.177 "name": null, 00:44:14.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:14.177 "is_configured": false, 00:44:14.177 "data_offset": 0, 00:44:14.177 "data_size": 7936 00:44:14.177 }, 00:44:14.177 { 00:44:14.177 "name": "BaseBdev2", 00:44:14.177 "uuid": "b0af9227-83ea-52b3-a1e7-2cf5bb58d87b", 00:44:14.177 "is_configured": true, 00:44:14.177 "data_offset": 256, 00:44:14.177 "data_size": 7936 00:44:14.177 } 00:44:14.177 ] 00:44:14.177 }' 00:44:14.177 17:41:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:44:14.177 17:41:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:14.752 17:41:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:44:14.752 17:41:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:44:14.752 17:41:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:44:14.752 17:41:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:44:14.752 17:41:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:44:14.752 17:41:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:44:14.752 17:41:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:14.752 17:41:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:14.752 17:41:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:14.752 17:41:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:14.752 17:41:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:44:14.752 "name": "raid_bdev1", 00:44:14.752 "uuid": "bb5382d5-44d4-48c4-9e37-58d8fcfde8dd", 00:44:14.752 "strip_size_kb": 0, 00:44:14.752 "state": "online", 00:44:14.752 "raid_level": "raid1", 00:44:14.752 "superblock": true, 00:44:14.752 "num_base_bdevs": 2, 00:44:14.752 "num_base_bdevs_discovered": 1, 00:44:14.752 "num_base_bdevs_operational": 1, 00:44:14.752 "base_bdevs_list": [ 00:44:14.752 { 00:44:14.752 "name": null, 00:44:14.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:14.752 "is_configured": false, 00:44:14.752 "data_offset": 0, 00:44:14.752 "data_size": 7936 00:44:14.752 }, 00:44:14.752 { 00:44:14.752 "name": "BaseBdev2", 00:44:14.752 "uuid": "b0af9227-83ea-52b3-a1e7-2cf5bb58d87b", 00:44:14.752 "is_configured": true, 00:44:14.752 "data_offset": 256, 00:44:14.752 "data_size": 7936 00:44:14.752 } 00:44:14.752 ] 00:44:14.752 }' 00:44:14.752 17:41:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:44:14.752 17:41:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:44:14.752 17:41:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:44:14.752 17:41:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:44:14.752 17:41:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89400 00:44:14.752 17:41:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89400 ']' 00:44:14.752 17:41:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89400 00:44:14.752 17:41:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:44:14.752 17:41:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:14.752 17:41:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89400 00:44:15.016 17:41:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:15.016 killing process with pid 89400 00:44:15.016 Received shutdown signal, test time was about 60.000000 seconds 00:44:15.016 00:44:15.017 Latency(us) 00:44:15.017 [2024-11-26T17:41:15.712Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:15.017 [2024-11-26T17:41:15.712Z] =================================================================================================================== 00:44:15.017 [2024-11-26T17:41:15.712Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:44:15.017 17:41:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:15.017 17:41:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89400' 00:44:15.017 17:41:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89400 00:44:15.017 [2024-11-26 17:41:15.452377] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:44:15.017 [2024-11-26 17:41:15.452565] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:44:15.017 17:41:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89400 00:44:15.017 [2024-11-26 17:41:15.452640] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:44:15.017 [2024-11-26 17:41:15.452654] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:44:15.275 [2024-11-26 17:41:15.797824] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:44:16.652 17:41:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:44:16.652 00:44:16.652 real 0m18.120s 00:44:16.652 user 0m23.492s 00:44:16.652 sys 0m1.897s 00:44:16.652 17:41:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:16.652 ************************************ 00:44:16.652 END TEST raid_rebuild_test_sb_md_interleaved 00:44:16.652 ************************************ 00:44:16.652 17:41:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:44:16.652 17:41:17 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:44:16.652 17:41:17 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:44:16.652 17:41:17 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89400 ']' 00:44:16.652 17:41:17 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89400 00:44:16.652 17:41:17 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:44:16.652 00:44:16.652 real 12m21.610s 00:44:16.652 user 16m38.197s 00:44:16.652 sys 1m56.581s 00:44:16.652 ************************************ 00:44:16.652 END TEST bdev_raid 00:44:16.652 17:41:17 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:16.652 17:41:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:44:16.652 ************************************ 00:44:16.652 17:41:17 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:44:16.652 17:41:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:16.652 17:41:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:16.652 17:41:17 -- common/autotest_common.sh@10 -- # set +x 00:44:16.652 
************************************ 00:44:16.652 START TEST spdkcli_raid 00:44:16.652 ************************************ 00:44:16.652 17:41:17 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:44:16.911 * Looking for test storage... 00:44:16.911 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:44:16.911 17:41:17 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:44:16.911 17:41:17 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:44:16.911 17:41:17 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:44:16.911 17:41:17 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:44:16.911 17:41:17 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:16.911 17:41:17 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:16.911 17:41:17 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:16.911 17:41:17 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:44:16.911 17:41:17 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:44:16.911 17:41:17 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:44:16.911 17:41:17 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:44:16.911 17:41:17 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:44:16.911 17:41:17 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:44:16.911 17:41:17 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:44:16.911 17:41:17 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:16.911 17:41:17 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:44:16.911 17:41:17 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:44:16.911 17:41:17 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:16.911 17:41:17 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:16.911 17:41:17 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:44:16.911 17:41:17 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:44:16.911 17:41:17 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:16.911 17:41:17 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:44:16.911 17:41:17 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:44:16.911 17:41:17 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:44:16.911 17:41:17 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:44:16.911 17:41:17 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:16.911 17:41:17 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:44:16.911 17:41:17 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:44:16.911 17:41:17 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:16.911 17:41:17 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:16.911 17:41:17 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:44:16.911 17:41:17 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:16.911 17:41:17 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:44:16.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:16.911 --rc genhtml_branch_coverage=1 00:44:16.911 --rc genhtml_function_coverage=1 00:44:16.911 --rc genhtml_legend=1 00:44:16.911 --rc geninfo_all_blocks=1 00:44:16.911 --rc geninfo_unexecuted_blocks=1 00:44:16.911 00:44:16.911 ' 00:44:16.911 17:41:17 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:44:16.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:16.911 --rc genhtml_branch_coverage=1 00:44:16.911 --rc genhtml_function_coverage=1 00:44:16.911 --rc genhtml_legend=1 00:44:16.911 --rc geninfo_all_blocks=1 00:44:16.912 --rc geninfo_unexecuted_blocks=1 00:44:16.912 00:44:16.912 ' 00:44:16.912 
17:41:17 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:44:16.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:16.912 --rc genhtml_branch_coverage=1 00:44:16.912 --rc genhtml_function_coverage=1 00:44:16.912 --rc genhtml_legend=1 00:44:16.912 --rc geninfo_all_blocks=1 00:44:16.912 --rc geninfo_unexecuted_blocks=1 00:44:16.912 00:44:16.912 ' 00:44:16.912 17:41:17 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:44:16.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:16.912 --rc genhtml_branch_coverage=1 00:44:16.912 --rc genhtml_function_coverage=1 00:44:16.912 --rc genhtml_legend=1 00:44:16.912 --rc geninfo_all_blocks=1 00:44:16.912 --rc geninfo_unexecuted_blocks=1 00:44:16.912 00:44:16.912 ' 00:44:16.912 17:41:17 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:44:16.912 17:41:17 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:44:16.912 17:41:17 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:44:16.912 17:41:17 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:44:16.912 17:41:17 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:44:16.912 17:41:17 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:44:16.912 17:41:17 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:44:16.912 17:41:17 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:44:16.912 17:41:17 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:44:16.912 17:41:17 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:44:16.912 17:41:17 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:44:16.912 17:41:17 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:44:16.912 17:41:17 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:44:16.912 17:41:17 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:44:16.912 17:41:17 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:44:16.912 17:41:17 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:44:16.912 17:41:17 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:44:16.912 17:41:17 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:44:16.912 17:41:17 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:44:16.912 17:41:17 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:44:16.912 17:41:17 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:44:16.912 17:41:17 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:44:16.912 17:41:17 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:44:16.912 17:41:17 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:44:16.912 17:41:17 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:44:16.912 17:41:17 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:44:16.912 17:41:17 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:44:16.912 17:41:17 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:44:16.912 17:41:17 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:44:16.912 17:41:17 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:44:16.912 17:41:17 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:44:16.912 17:41:17 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:44:16.912 17:41:17 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:44:16.912 17:41:17 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:16.912 17:41:17 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:44:16.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:16.912 17:41:17 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:44:16.912 17:41:17 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90082 00:44:16.912 17:41:17 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90082 00:44:16.912 17:41:17 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 90082 ']' 00:44:16.912 17:41:17 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:16.912 17:41:17 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:44:16.912 17:41:17 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:16.912 17:41:17 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:16.912 17:41:17 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:16.912 17:41:17 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:44:17.171 [2024-11-26 17:41:17.623811] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:44:17.171 [2024-11-26 17:41:17.623961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90082 ] 00:44:17.171 [2024-11-26 17:41:17.809494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:44:17.431 [2024-11-26 17:41:17.969093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:17.431 [2024-11-26 17:41:17.969135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:18.809 17:41:19 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:18.809 17:41:19 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:44:18.809 17:41:19 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:44:18.809 17:41:19 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:18.809 17:41:19 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:44:18.809 17:41:19 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:44:18.809 17:41:19 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:18.809 17:41:19 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:44:18.809 17:41:19 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:44:18.809 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:44:18.809 ' 00:44:20.185 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:44:20.185 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:44:20.444 17:41:20 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:44:20.444 17:41:20 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:20.444 17:41:20 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:44:20.444 17:41:20 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:44:20.444 17:41:20 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:20.444 17:41:20 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:44:20.444 17:41:20 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:44:20.444 ' 00:44:21.384 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:44:21.642 17:41:22 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:44:21.642 17:41:22 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:21.642 17:41:22 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:44:21.642 17:41:22 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:44:21.643 17:41:22 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:21.643 17:41:22 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:44:21.643 17:41:22 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:44:21.643 17:41:22 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:44:22.210 17:41:22 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:44:22.210 17:41:22 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:44:22.210 17:41:22 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:44:22.210 17:41:22 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:22.210 17:41:22 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:44:22.210 17:41:22 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:44:22.210 17:41:22 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:22.210 17:41:22 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:44:22.210 17:41:22 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:44:22.210 ' 00:44:23.146 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:44:23.405 17:41:23 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:44:23.405 17:41:23 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:23.405 17:41:23 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:44:23.405 17:41:23 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:44:23.405 17:41:23 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:23.405 17:41:23 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:44:23.405 17:41:23 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:44:23.405 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:44:23.405 ' 00:44:24.781 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:44:24.781 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:44:25.040 17:41:25 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:44:25.040 17:41:25 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:25.040 17:41:25 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:44:25.040 17:41:25 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90082 00:44:25.040 17:41:25 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90082 ']' 00:44:25.040 17:41:25 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90082 00:44:25.040 17:41:25 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:44:25.040 17:41:25 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:25.040 17:41:25 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90082 00:44:25.040 killing process with pid 90082 00:44:25.040 17:41:25 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:25.040 17:41:25 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:25.040 17:41:25 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90082' 00:44:25.040 17:41:25 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 90082 00:44:25.040 17:41:25 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 90082 00:44:28.325 17:41:28 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:44:28.325 17:41:28 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90082 ']' 00:44:28.325 17:41:28 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90082 00:44:28.325 17:41:28 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90082 ']' 00:44:28.325 Process with pid 90082 is not found 00:44:28.325 17:41:28 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90082 00:44:28.325 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (90082) - No such process 00:44:28.325 17:41:28 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 90082 is not found' 00:44:28.325 17:41:28 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:44:28.325 17:41:28 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:44:28.325 17:41:28 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:44:28.325 17:41:28 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:44:28.325 ************************************ 00:44:28.325 END TEST spdkcli_raid 
00:44:28.325 ************************************ 00:44:28.325 00:44:28.325 real 0m11.007s 00:44:28.325 user 0m22.438s 00:44:28.325 sys 0m1.393s 00:44:28.325 17:41:28 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:28.325 17:41:28 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:44:28.325 17:41:28 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:44:28.325 17:41:28 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:44:28.325 17:41:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:28.325 17:41:28 -- common/autotest_common.sh@10 -- # set +x 00:44:28.325 ************************************ 00:44:28.325 START TEST blockdev_raid5f 00:44:28.325 ************************************ 00:44:28.325 17:41:28 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:44:28.325 * Looking for test storage... 00:44:28.325 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:44:28.325 17:41:28 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:44:28.325 17:41:28 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:44:28.325 17:41:28 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:44:28.325 17:41:28 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:44:28.325 17:41:28 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:28.325 17:41:28 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:28.325 17:41:28 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:28.325 17:41:28 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:44:28.326 17:41:28 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:44:28.326 17:41:28 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:44:28.326 17:41:28 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:44:28.326 17:41:28 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:44:28.326 17:41:28 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:44:28.326 17:41:28 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:44:28.326 17:41:28 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:28.326 17:41:28 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:44:28.326 17:41:28 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:44:28.326 17:41:28 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:28.326 17:41:28 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:44:28.326 17:41:28 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:44:28.326 17:41:28 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:44:28.326 17:41:28 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:28.326 17:41:28 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:44:28.326 17:41:28 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:44:28.326 17:41:28 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:44:28.326 17:41:28 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:44:28.326 17:41:28 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:28.326 17:41:28 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:44:28.326 17:41:28 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:44:28.326 17:41:28 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:28.326 17:41:28 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:28.326 17:41:28 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:44:28.326 17:41:28 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:28.326 17:41:28 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:44:28.326 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:28.326 --rc genhtml_branch_coverage=1 00:44:28.326 --rc genhtml_function_coverage=1 00:44:28.326 --rc genhtml_legend=1 00:44:28.326 --rc geninfo_all_blocks=1 00:44:28.326 --rc geninfo_unexecuted_blocks=1 00:44:28.326 00:44:28.326 ' 00:44:28.326 17:41:28 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:44:28.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:28.326 --rc genhtml_branch_coverage=1 00:44:28.326 --rc genhtml_function_coverage=1 00:44:28.326 --rc genhtml_legend=1 00:44:28.326 --rc geninfo_all_blocks=1 00:44:28.326 --rc geninfo_unexecuted_blocks=1 00:44:28.326 00:44:28.326 ' 00:44:28.326 17:41:28 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:44:28.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:28.326 --rc genhtml_branch_coverage=1 00:44:28.326 --rc genhtml_function_coverage=1 00:44:28.326 --rc genhtml_legend=1 00:44:28.326 --rc geninfo_all_blocks=1 00:44:28.326 --rc geninfo_unexecuted_blocks=1 00:44:28.326 00:44:28.326 ' 00:44:28.326 17:41:28 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:44:28.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:28.326 --rc genhtml_branch_coverage=1 00:44:28.326 --rc genhtml_function_coverage=1 00:44:28.326 --rc genhtml_legend=1 00:44:28.326 --rc geninfo_all_blocks=1 00:44:28.326 --rc geninfo_unexecuted_blocks=1 00:44:28.326 00:44:28.326 ' 00:44:28.326 17:41:28 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:44:28.326 17:41:28 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:44:28.326 17:41:28 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:44:28.326 17:41:28 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:44:28.326 17:41:28 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:44:28.326 17:41:28 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:44:28.326 17:41:28 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:44:28.326 17:41:28 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:44:28.326 17:41:28 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:44:28.326 17:41:28 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:44:28.326 17:41:28 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:44:28.326 17:41:28 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:44:28.326 17:41:28 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:44:28.326 17:41:28 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:44:28.326 17:41:28 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:44:28.326 17:41:28 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:44:28.326 17:41:28 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:44:28.326 17:41:28 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:44:28.326 17:41:28 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:44:28.326 17:41:28 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:44:28.326 17:41:28 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:44:28.326 17:41:28 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:44:28.326 17:41:28 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:44:28.326 17:41:28 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:44:28.326 17:41:28 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90368 00:44:28.326 17:41:28 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:44:28.326 17:41:28 blockdev_raid5f -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:44:28.326 17:41:28 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90368 00:44:28.326 17:41:28 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 90368 ']' 00:44:28.326 17:41:28 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:28.326 17:41:28 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:28.326 17:41:28 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:28.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:28.326 17:41:28 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:28.326 17:41:28 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:44:28.326 [2024-11-26 17:41:28.714078] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:44:28.326 [2024-11-26 17:41:28.714308] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90368 ] 00:44:28.326 [2024-11-26 17:41:28.892282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:28.584 [2024-11-26 17:41:29.033839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:29.519 17:41:30 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:29.519 17:41:30 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:44:29.519 17:41:30 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:44:29.519 17:41:30 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:44:29.519 17:41:30 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:44:29.519 17:41:30 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:29.519 17:41:30 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:44:29.519 Malloc0 00:44:29.519 Malloc1 00:44:29.777 Malloc2 00:44:29.777 17:41:30 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:29.777 17:41:30 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:44:29.777 17:41:30 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:29.778 17:41:30 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:44:29.778 17:41:30 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:29.778 17:41:30 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:44:29.778 17:41:30 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:44:29.778 17:41:30 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:29.778 17:41:30 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:44:29.778 17:41:30 
blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:29.778 17:41:30 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:44:29.778 17:41:30 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:29.778 17:41:30 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:44:29.778 17:41:30 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:29.778 17:41:30 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:44:29.778 17:41:30 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:29.778 17:41:30 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:44:29.778 17:41:30 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:29.778 17:41:30 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:44:29.778 17:41:30 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:44:29.778 17:41:30 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:44:29.778 17:41:30 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:29.778 17:41:30 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:44:29.778 17:41:30 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:29.778 17:41:30 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:44:29.778 17:41:30 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:44:29.778 17:41:30 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "817172e9-7f64-4b39-97fd-b12eba111586"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "817172e9-7f64-4b39-97fd-b12eba111586",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "817172e9-7f64-4b39-97fd-b12eba111586",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "22c99141-88b7-4379-8d1c-7dd29cb95dbf",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "836ea76d-1d11-4a0a-a8e4-3cb33b2cf7fb",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "13474d27-61e3-4c0e-8117-8f4fb3009245",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:44:29.778 17:41:30 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:44:29.778 17:41:30 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:44:29.778 17:41:30 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:44:29.778 17:41:30 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 90368 00:44:29.778 17:41:30 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 90368 ']' 00:44:29.778 17:41:30 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 90368 00:44:29.778 17:41:30 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:44:29.778 17:41:30 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:29.778 
17:41:30 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90368 00:44:29.778 killing process with pid 90368 00:44:29.778 17:41:30 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:29.778 17:41:30 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:29.778 17:41:30 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90368' 00:44:29.778 17:41:30 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 90368 00:44:29.778 17:41:30 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 90368 00:44:33.065 17:41:33 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:44:33.065 17:41:33 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:44:33.065 17:41:33 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:44:33.065 17:41:33 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:33.065 17:41:33 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:44:33.065 ************************************ 00:44:33.065 START TEST bdev_hello_world 00:44:33.065 ************************************ 00:44:33.065 17:41:33 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:44:33.065 [2024-11-26 17:41:33.514059] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:44:33.065 [2024-11-26 17:41:33.514177] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90445 ] 00:44:33.065 [2024-11-26 17:41:33.687387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:33.324 [2024-11-26 17:41:33.827914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:33.893 [2024-11-26 17:41:34.526479] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:44:33.893 [2024-11-26 17:41:34.526545] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:44:33.893 [2024-11-26 17:41:34.526565] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:44:33.893 [2024-11-26 17:41:34.527141] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:44:33.893 [2024-11-26 17:41:34.527324] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:44:33.893 [2024-11-26 17:41:34.527346] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:44:33.893 [2024-11-26 17:41:34.527411] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:44:33.893 00:44:33.893 [2024-11-26 17:41:34.527433] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:44:35.802 00:44:35.802 real 0m2.902s 00:44:35.802 user 0m2.418s 00:44:35.802 sys 0m0.359s 00:44:35.802 17:41:36 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:35.802 17:41:36 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:44:35.802 ************************************ 00:44:35.802 END TEST bdev_hello_world 00:44:35.802 ************************************ 00:44:35.802 17:41:36 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:44:35.802 17:41:36 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:44:35.802 17:41:36 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:35.802 17:41:36 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:44:35.802 ************************************ 00:44:35.802 START TEST bdev_bounds 00:44:35.802 ************************************ 00:44:35.802 17:41:36 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:44:35.802 17:41:36 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90488 00:44:35.802 17:41:36 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:44:35.802 17:41:36 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:44:35.802 17:41:36 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90488' 00:44:35.802 Process bdevio pid: 90488 00:44:35.802 17:41:36 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90488 00:44:35.802 17:41:36 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 90488 ']' 00:44:35.802 17:41:36 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:35.802 17:41:36 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:35.802 17:41:36 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:35.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:35.802 17:41:36 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:35.802 17:41:36 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:44:36.062 [2024-11-26 17:41:36.505602] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:44:36.062 [2024-11-26 17:41:36.505840] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90488 ] 00:44:36.062 [2024-11-26 17:41:36.686145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:44:36.321 [2024-11-26 17:41:36.842242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:36.321 [2024-11-26 17:41:36.842485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:44:36.321 [2024-11-26 17:41:36.842442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:36.889 17:41:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:36.889 17:41:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:44:36.889 17:41:37 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:44:37.148 I/O targets: 00:44:37.148 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:44:37.148 00:44:37.148 
00:44:37.148 CUnit - A unit testing framework for C - Version 2.1-3
00:44:37.148 http://cunit.sourceforge.net/
00:44:37.148
00:44:37.148
00:44:37.148 Suite: bdevio tests on: raid5f
00:44:37.148 Test: blockdev write read block ...passed
00:44:37.148 Test: blockdev write zeroes read block ...passed
00:44:37.148 Test: blockdev write zeroes read no split ...passed
00:44:37.148 Test: blockdev write zeroes read split ...passed
00:44:37.407 Test: blockdev write zeroes read split partial ...passed
00:44:37.407 Test: blockdev reset ...passed
00:44:37.407 Test: blockdev write read 8 blocks ...passed
00:44:37.407 Test: blockdev write read size > 128k ...passed
00:44:37.407 Test: blockdev write read invalid size ...passed
00:44:37.407 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:44:37.407 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:44:37.407 Test: blockdev write read max offset ...passed
00:44:37.407 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:44:37.407 Test: blockdev writev readv 8 blocks ...passed
00:44:37.407 Test: blockdev writev readv 30 x 1block ...passed
00:44:37.407 Test: blockdev writev readv block ...passed
00:44:37.407 Test: blockdev writev readv size > 128k ...passed
00:44:37.407 Test: blockdev writev readv size > 128k in two iovs ...passed
00:44:37.407 Test: blockdev comparev and writev ...passed
00:44:37.407 Test: blockdev nvme passthru rw ...passed
00:44:37.407 Test: blockdev nvme passthru vendor specific ...passed
00:44:37.407 Test: blockdev nvme admin passthru ...passed
00:44:37.407 Test: blockdev copy ...passed
00:44:37.407
00:44:37.407 Run Summary: Type Total Ran Passed Failed Inactive
00:44:37.407 suites 1 1 n/a 0 0
00:44:37.407 tests 23 23 23 0 0
00:44:37.407 asserts 130 130 130 0 n/a
00:44:37.407
00:44:37.407 Elapsed time = 0.725 seconds
00:44:37.407 0
00:44:37.407 17:41:37 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90488
17:41:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 90488 ']' 00:44:37.407 17:41:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 90488 00:44:37.407 17:41:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:44:37.407 17:41:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:37.407 17:41:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90488 00:44:37.407 17:41:38 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:37.407 17:41:38 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:37.407 17:41:38 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90488' 00:44:37.407 killing process with pid 90488 00:44:37.407 17:41:38 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 90488 00:44:37.407 17:41:38 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 90488 00:44:39.349 17:41:39 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:44:39.349 00:44:39.349 real 0m3.384s 00:44:39.349 user 0m8.384s 00:44:39.349 sys 0m0.510s 00:44:39.349 17:41:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:39.349 ************************************ 00:44:39.349 END TEST bdev_bounds 00:44:39.349 ************************************ 00:44:39.349 17:41:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:44:39.349 17:41:39 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:44:39.349 17:41:39 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:44:39.349 17:41:39 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:39.349 
17:41:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:44:39.349 ************************************ 00:44:39.349 START TEST bdev_nbd 00:44:39.349 ************************************ 00:44:39.349 17:41:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:44:39.349 17:41:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:44:39.349 17:41:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:44:39.349 17:41:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:44:39.349 17:41:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:44:39.349 17:41:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:44:39.349 17:41:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:44:39.349 17:41:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:44:39.349 17:41:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:44:39.349 17:41:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:44:39.349 17:41:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:44:39.349 17:41:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:44:39.349 17:41:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:44:39.349 17:41:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:44:39.349 17:41:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:44:39.349 17:41:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:44:39.349 17:41:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90559 00:44:39.349 17:41:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:44:39.349 17:41:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:44:39.349 17:41:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90559 /var/tmp/spdk-nbd.sock 00:44:39.349 17:41:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90559 ']' 00:44:39.349 17:41:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:44:39.349 17:41:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:39.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:44:39.349 17:41:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:44:39.349 17:41:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:39.349 17:41:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:44:39.349 [2024-11-26 17:41:39.984254] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:44:39.349 [2024-11-26 17:41:39.984572] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:39.609 [2024-11-26 17:41:40.182368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:39.867 [2024-11-26 17:41:40.340100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:40.435 17:41:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:40.435 17:41:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:44:40.435 17:41:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:44:40.435 17:41:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:44:40.435 17:41:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:44:40.435 17:41:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:44:40.435 17:41:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:44:40.435 17:41:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:44:40.435 17:41:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:44:40.435 17:41:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:44:40.435 17:41:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:44:40.435 17:41:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:44:40.435 17:41:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:44:40.435 17:41:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:44:40.435 17:41:41 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:44:40.694 17:41:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:44:40.953 17:41:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:44:40.953 17:41:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:44:40.953 17:41:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:44:40.953 17:41:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:44:40.953 17:41:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:44:40.953 17:41:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:44:40.953 17:41:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:44:40.953 17:41:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:44:40.953 17:41:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:44:40.953 17:41:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:44:40.953 17:41:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:44:40.953 1+0 records in 00:44:40.953 1+0 records out 00:44:40.953 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421232 s, 9.7 MB/s 00:44:40.953 17:41:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:44:40.953 17:41:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:44:40.953 17:41:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:44:40.953 17:41:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:44:40.953 17:41:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:44:40.953 17:41:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:44:40.953 17:41:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:44:40.953 17:41:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:44:41.211 17:41:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:44:41.211 { 00:44:41.211 "nbd_device": "/dev/nbd0", 00:44:41.211 "bdev_name": "raid5f" 00:44:41.211 } 00:44:41.211 ]' 00:44:41.211 17:41:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:44:41.211 17:41:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:44:41.211 { 00:44:41.211 "nbd_device": "/dev/nbd0", 00:44:41.211 "bdev_name": "raid5f" 00:44:41.211 } 00:44:41.211 ]' 00:44:41.211 17:41:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:44:41.211 17:41:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:44:41.211 17:41:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:44:41.211 17:41:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:44:41.211 17:41:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:44:41.211 17:41:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:44:41.211 17:41:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:44:41.211 17:41:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:44:41.469 17:41:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:44:41.469 17:41:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:44:41.469 17:41:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:44:41.469 17:41:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:44:41.469 17:41:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:44:41.469 17:41:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:44:41.469 17:41:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:44:41.469 17:41:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:44:41.469 17:41:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:44:41.469 17:41:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:44:41.469 17:41:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:44:41.727 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:44:41.727 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:44:41.727 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:44:41.727 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:44:41.727 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:44:41.727 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:44:41.727 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:44:41.727 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:44:41.727 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:44:41.727 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:44:41.727 17:41:42 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:44:41.727 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:44:41.727 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:44:41.727 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:44:41.727 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:44:41.727 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:44:41.727 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:44:41.727 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:44:41.727 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:44:41.727 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:44:41.727 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:44:41.727 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:44:41.727 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:44:41.727 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:44:41.727 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:44:41.727 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:44:41.727 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:44:41.727 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:44:41.986 /dev/nbd0 00:44:41.986 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:44:41.986 17:41:42 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:44:41.986 17:41:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:44:41.986 17:41:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:44:41.986 17:41:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:44:41.986 17:41:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:44:41.986 17:41:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:44:41.986 17:41:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:44:41.986 17:41:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:44:41.986 17:41:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:44:41.986 17:41:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:44:41.986 1+0 records in 00:44:41.986 1+0 records out 00:44:41.986 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00057505 s, 7.1 MB/s 00:44:41.986 17:41:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:44:41.986 17:41:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:44:41.986 17:41:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:44:41.986 17:41:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:44:41.986 17:41:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:44:41.986 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:44:41.986 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:44:41.986 17:41:42 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:44:41.986 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:44:41.986 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:44:42.245 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:44:42.245 { 00:44:42.245 "nbd_device": "/dev/nbd0", 00:44:42.245 "bdev_name": "raid5f" 00:44:42.245 } 00:44:42.245 ]' 00:44:42.245 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:44:42.245 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:44:42.245 { 00:44:42.245 "nbd_device": "/dev/nbd0", 00:44:42.245 "bdev_name": "raid5f" 00:44:42.245 } 00:44:42.245 ]' 00:44:42.245 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:44:42.245 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:44:42.245 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:44:42.245 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:44:42.245 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:44:42.245 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:44:42.245 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:44:42.245 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:44:42.245 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:44:42.245 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:44:42.245 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:44:42.245 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:44:42.245 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:44:42.245 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:44:42.505 256+0 records in 00:44:42.505 256+0 records out 00:44:42.505 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139013 s, 75.4 MB/s 00:44:42.505 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:44:42.505 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:44:42.505 256+0 records in 00:44:42.505 256+0 records out 00:44:42.505 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0335882 s, 31.2 MB/s 00:44:42.505 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:44:42.505 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:44:42.505 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:44:42.505 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:44:42.505 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:44:42.505 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:44:42.505 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:44:42.505 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:44:42.505 17:41:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:44:42.505 17:41:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:44:42.505 17:41:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:44:42.505 17:41:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:44:42.505 17:41:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:44:42.505 17:41:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:44:42.505 17:41:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:44:42.505 17:41:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:44:42.505 17:41:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:44:42.763 17:41:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:44:42.763 17:41:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:44:42.763 17:41:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:44:42.763 17:41:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:44:42.763 17:41:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:44:42.763 17:41:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:44:42.763 17:41:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:44:42.763 17:41:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:44:42.763 17:41:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:44:42.763 17:41:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:44:42.763 17:41:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:44:43.023 17:41:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:44:43.023 17:41:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:44:43.023 17:41:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:44:43.023 17:41:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:44:43.023 17:41:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:44:43.023 17:41:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:44:43.023 17:41:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:44:43.023 17:41:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:44:43.023 17:41:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:44:43.023 17:41:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:44:43.023 17:41:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:44:43.023 17:41:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:44:43.023 17:41:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:44:43.023 17:41:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:44:43.023 17:41:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:44:43.023 17:41:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:44:43.283 malloc_lvol_verify 00:44:43.283 17:41:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:44:43.542 16d665c8-4677-4cd2-b7d7-dd678b79980c 00:44:43.542 17:41:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:44:43.801 b29b2786-bfb6-43f0-be7d-7f3bec91c5f5 00:44:43.801 17:41:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:44:43.801 /dev/nbd0 00:44:44.060 17:41:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:44:44.060 17:41:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:44:44.060 17:41:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:44:44.060 17:41:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:44:44.060 17:41:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:44:44.060 mke2fs 1.47.0 (5-Feb-2023) 00:44:44.060 Discarding device blocks: 0/4096 done 00:44:44.060 Creating filesystem with 4096 1k blocks and 1024 inodes 00:44:44.060 00:44:44.060 Allocating group tables: 0/1 done 00:44:44.060 Writing inode tables: 0/1 done 00:44:44.060 Creating journal (1024 blocks): done 00:44:44.060 Writing superblocks and filesystem accounting information: 0/1 done 00:44:44.060 00:44:44.060 17:41:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:44:44.060 17:41:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:44:44.060 17:41:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:44:44.060 17:41:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:44:44.060 17:41:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:44:44.060 17:41:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:44:44.060 17:41:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:44:44.319 17:41:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:44:44.319 17:41:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:44:44.319 17:41:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:44:44.319 17:41:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:44:44.319 17:41:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:44:44.319 17:41:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:44:44.319 17:41:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:44:44.319 17:41:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:44:44.319 17:41:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90559 00:44:44.319 17:41:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90559 ']' 00:44:44.319 17:41:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90559 00:44:44.319 17:41:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:44:44.319 17:41:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:44.319 17:41:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90559 00:44:44.319 killing process with pid 90559 00:44:44.319 17:41:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:44.319 17:41:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:44.319 17:41:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90559' 00:44:44.319 17:41:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90559 00:44:44.319 17:41:44 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 90559 00:44:46.227 17:41:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:44:46.227 00:44:46.227 real 0m6.611s 00:44:46.227 user 0m8.928s 00:44:46.227 sys 0m1.563s 00:44:46.227 17:41:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:46.227 ************************************ 00:44:46.227 END TEST bdev_nbd 00:44:46.227 ************************************ 00:44:46.227 17:41:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:44:46.227 17:41:46 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:44:46.227 17:41:46 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:44:46.227 17:41:46 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:44:46.227 17:41:46 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:44:46.227 17:41:46 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:44:46.227 17:41:46 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:46.227 17:41:46 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:44:46.227 ************************************ 00:44:46.227 START TEST bdev_fio 00:44:46.227 ************************************ 00:44:46.227 17:41:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:44:46.227 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:44:46.227 17:41:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:44:46.227 17:41:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:44:46.227 17:41:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:44:46.227 17:41:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:44:46.227 17:41:46 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:44:46.227 17:41:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:44:46.227 17:41:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:44:46.227 17:41:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:44:46.227 17:41:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:44:46.227 17:41:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:44:46.227 17:41:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:44:46.227 17:41:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:44:46.227 17:41:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:44:46.227 17:41:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:44:46.227 17:41:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:44:46.227 17:41:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:44:46.227 17:41:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:44:46.227 17:41:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:44:46.227 17:41:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:44:46.227 17:41:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:44:46.227 17:41:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:44:46.227 17:41:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:44:46.227 17:41:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:44:46.227 17:41:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:44:46.227 17:41:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:44:46.227 17:41:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:44:46.227 17:41:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:44:46.227 17:41:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:44:46.227 17:41:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:44:46.227 17:41:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:46.227 17:41:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:44:46.227 ************************************ 00:44:46.227 START TEST bdev_fio_rw_verify 00:44:46.227 ************************************ 00:44:46.227 17:41:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:44:46.227 17:41:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:44:46.227 17:41:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:44:46.227 17:41:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:46.227 17:41:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:44:46.228 17:41:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:44:46.228 17:41:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:44:46.228 17:41:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:44:46.228 17:41:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:44:46.228 17:41:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:44:46.228 17:41:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:44:46.228 17:41:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:44:46.228 17:41:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:44:46.228 17:41:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:44:46.228 17:41:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:44:46.228 17:41:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:44:46.228 17:41:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:44:46.487 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:44:46.487 fio-3.35 00:44:46.487 Starting 1 thread 00:44:58.847 00:44:58.847 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90777: Tue Nov 26 17:41:58 2024 00:44:58.847 read: IOPS=9676, BW=37.8MiB/s (39.6MB/s)(378MiB/10001msec) 00:44:58.847 slat (nsec): min=18523, max=77013, avg=24970.25, stdev=5095.90 00:44:58.847 clat (usec): min=10, max=473, avg=163.44, stdev=65.77 00:44:58.847 lat (usec): min=32, max=521, avg=188.41, stdev=67.84 00:44:58.847 clat percentiles (usec): 00:44:58.847 | 50.000th=[ 159], 99.000th=[ 334], 99.900th=[ 392], 99.990th=[ 441], 00:44:58.847 | 99.999th=[ 474] 00:44:58.847 write: IOPS=10.2k, BW=39.8MiB/s (41.7MB/s)(393MiB/9891msec); 0 zone resets 00:44:58.847 slat (usec): min=8, max=390, avg=20.86, stdev= 6.75 00:44:58.847 clat (usec): min=74, max=1884, avg=379.45, stdev=86.45 00:44:58.847 lat (usec): min=93, max=2274, avg=400.31, stdev=90.40 00:44:58.847 clat percentiles (usec): 00:44:58.848 | 50.000th=[ 363], 99.000th=[ 627], 99.900th=[ 971], 99.990th=[ 1598], 00:44:58.848 | 99.999th=[ 1778] 00:44:58.848 bw ( KiB/s): min=29232, max=46144, per=98.25%, avg=39993.63, stdev=4918.94, samples=19 00:44:58.848 iops : min= 7308, max=11536, avg=9998.37, stdev=1229.77, samples=19 00:44:58.848 lat (usec) : 20=0.01%, 50=0.01%, 
100=9.83%, 250=34.97%, 500=50.66% 00:44:58.848 lat (usec) : 750=4.43%, 1000=0.07% 00:44:58.848 lat (msec) : 2=0.04% 00:44:58.848 cpu : usr=98.78%, sys=0.43%, ctx=21, majf=0, minf=8249 00:44:58.848 IO depths : 1=7.6%, 2=19.9%, 4=55.1%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:58.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:58.848 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:58.848 issued rwts: total=96775,100657,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:58.848 latency : target=0, window=0, percentile=100.00%, depth=8 00:44:58.848 00:44:58.848 Run status group 0 (all jobs): 00:44:58.848 READ: bw=37.8MiB/s (39.6MB/s), 37.8MiB/s-37.8MiB/s (39.6MB/s-39.6MB/s), io=378MiB (396MB), run=10001-10001msec 00:44:58.848 WRITE: bw=39.8MiB/s (41.7MB/s), 39.8MiB/s-39.8MiB/s (41.7MB/s-41.7MB/s), io=393MiB (412MB), run=9891-9891msec 00:44:59.784 ----------------------------------------------------- 00:44:59.784 Suppressions used: 00:44:59.784 count bytes template 00:44:59.784 1 7 /usr/src/fio/parse.c 00:44:59.784 855 82080 /usr/src/fio/iolog.c 00:44:59.784 1 8 libtcmalloc_minimal.so 00:44:59.784 1 904 libcrypto.so 00:44:59.784 ----------------------------------------------------- 00:44:59.784 00:44:59.784 00:44:59.784 real 0m13.454s 00:44:59.784 user 0m13.632s 00:44:59.784 sys 0m0.814s 00:44:59.784 ************************************ 00:44:59.784 END TEST bdev_fio_rw_verify 00:44:59.784 ************************************ 00:44:59.784 17:42:00 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:59.784 17:42:00 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:44:59.784 17:42:00 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:44:59.784 17:42:00 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:44:59.784 17:42:00 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:44:59.784 17:42:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:44:59.784 17:42:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:44:59.784 17:42:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:44:59.784 17:42:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:44:59.784 17:42:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:44:59.784 17:42:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:44:59.784 17:42:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:44:59.784 17:42:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:44:59.784 17:42:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:44:59.784 17:42:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:44:59.785 17:42:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:44:59.785 17:42:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:44:59.785 17:42:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:44:59.785 17:42:00 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "817172e9-7f64-4b39-97fd-b12eba111586"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "817172e9-7f64-4b39-97fd-b12eba111586",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "817172e9-7f64-4b39-97fd-b12eba111586",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "22c99141-88b7-4379-8d1c-7dd29cb95dbf",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "836ea76d-1d11-4a0a-a8e4-3cb33b2cf7fb",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "13474d27-61e3-4c0e-8117-8f4fb3009245",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:44:59.785 17:42:00 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:44:59.785 17:42:00 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:44:59.785 17:42:00 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:44:59.785 /home/vagrant/spdk_repo/spdk 00:44:59.785 17:42:00 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:44:59.785 17:42:00 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:44:59.785 17:42:00 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:44:59.785 ************************************ 00:44:59.785 END TEST bdev_fio 00:44:59.785 ************************************ 00:44:59.785 00:44:59.785 real 0m13.743s 00:44:59.785 user 0m13.757s 00:44:59.785 sys 0m0.955s 00:44:59.785 17:42:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:59.785 17:42:00 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:44:59.785 17:42:00 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:44:59.785 17:42:00 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:44:59.785 17:42:00 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:44:59.785 17:42:00 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:59.785 17:42:00 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:44:59.785 ************************************ 00:44:59.785 START TEST bdev_verify 00:44:59.785 ************************************ 00:44:59.785 17:42:00 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:44:59.785 [2024-11-26 17:42:00.454451] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:44:59.785 [2024-11-26 17:42:00.454615] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90943 ] 00:45:00.044 [2024-11-26 17:42:00.640021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:45:00.304 [2024-11-26 17:42:00.804946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:00.304 [2024-11-26 17:42:00.804978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:00.872 Running I/O for 5 seconds... 00:45:03.190 8810.00 IOPS, 34.41 MiB/s [2024-11-26T17:42:04.820Z] 9038.00 IOPS, 35.30 MiB/s [2024-11-26T17:42:05.757Z] 9184.00 IOPS, 35.88 MiB/s [2024-11-26T17:42:06.697Z] 9234.75 IOPS, 36.07 MiB/s [2024-11-26T17:42:06.697Z] 9263.60 IOPS, 36.19 MiB/s 00:45:06.002 Latency(us) 00:45:06.002 [2024-11-26T17:42:06.697Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:06.002 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:45:06.002 Verification LBA range: start 0x0 length 0x2000 00:45:06.002 raid5f : 5.03 4902.46 19.15 0.00 0.00 39542.98 119.84 30220.97 00:45:06.002 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:45:06.002 Verification LBA range: start 0x2000 length 0x2000 00:45:06.002 raid5f : 5.03 4357.48 17.02 0.00 0.00 43803.64 186.91 34570.96 00:45:06.002 [2024-11-26T17:42:06.697Z] =================================================================================================================== 00:45:06.002 [2024-11-26T17:42:06.697Z] Total : 9259.94 36.17 0.00 0.00 41549.24 119.84 34570.96 00:45:07.909 00:45:07.909 real 0m7.780s 00:45:07.909 user 0m14.180s 00:45:07.909 sys 0m0.436s 00:45:07.909 17:42:08 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:07.909 17:42:08 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:45:07.909 ************************************ 00:45:07.909 END TEST bdev_verify 00:45:07.909 ************************************ 00:45:07.909 17:42:08 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:45:07.909 17:42:08 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:45:07.909 17:42:08 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:07.909 17:42:08 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:45:07.909 ************************************ 00:45:07.909 START TEST bdev_verify_big_io 00:45:07.909 ************************************ 00:45:07.909 17:42:08 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:45:07.909 [2024-11-26 17:42:08.296709] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:45:07.909 [2024-11-26 17:42:08.296839] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91042 ] 00:45:07.909 [2024-11-26 17:42:08.476458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:45:08.169 [2024-11-26 17:42:08.621083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:08.169 [2024-11-26 17:42:08.621131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:08.737 Running I/O for 5 seconds... 
00:45:10.650 568.00 IOPS, 35.50 MiB/s [2024-11-26T17:42:12.279Z] 570.50 IOPS, 35.66 MiB/s [2024-11-26T17:42:13.655Z] 592.00 IOPS, 37.00 MiB/s [2024-11-26T17:42:14.592Z] 571.00 IOPS, 35.69 MiB/s [2024-11-26T17:42:14.850Z] 609.20 IOPS, 38.08 MiB/s 00:45:14.155 Latency(us) 00:45:14.155 [2024-11-26T17:42:14.850Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:14.155 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:45:14.155 Verification LBA range: start 0x0 length 0x200 00:45:14.155 raid5f : 5.36 331.65 20.73 0.00 0.00 9716357.73 375.62 417598.83 00:45:14.155 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:45:14.155 Verification LBA range: start 0x200 length 0x200 00:45:14.155 raid5f : 5.41 281.31 17.58 0.00 0.00 11263606.77 279.03 527493.25 00:45:14.155 [2024-11-26T17:42:14.850Z] =================================================================================================================== 00:45:14.155 [2024-11-26T17:42:14.850Z] Total : 612.96 38.31 0.00 0.00 10429967.74 279.03 527493.25 00:45:16.061 ************************************ 00:45:16.061 END TEST bdev_verify_big_io 00:45:16.061 ************************************ 00:45:16.061 00:45:16.061 real 0m8.219s 00:45:16.061 user 0m15.145s 00:45:16.061 sys 0m0.370s 00:45:16.061 17:42:16 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:16.061 17:42:16 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:45:16.061 17:42:16 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:45:16.061 17:42:16 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:45:16.061 17:42:16 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:16.061 17:42:16 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:45:16.061 ************************************ 00:45:16.061 START TEST bdev_write_zeroes 00:45:16.061 ************************************ 00:45:16.061 17:42:16 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:45:16.061 [2024-11-26 17:42:16.587849] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:45:16.061 [2024-11-26 17:42:16.587994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91147 ] 00:45:16.322 [2024-11-26 17:42:16.765656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:16.322 [2024-11-26 17:42:16.912126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:16.893 Running I/O for 1 seconds... 
00:45:18.273 25935.00 IOPS, 101.31 MiB/s 00:45:18.273 Latency(us) 00:45:18.273 [2024-11-26T17:42:18.968Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:18.273 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:45:18.273 raid5f : 1.01 25894.35 101.15 0.00 0.00 4926.64 1559.70 6639.46 00:45:18.273 [2024-11-26T17:42:18.968Z] =================================================================================================================== 00:45:18.273 [2024-11-26T17:42:18.968Z] Total : 25894.35 101.15 0.00 0.00 4926.64 1559.70 6639.46 00:45:19.652 00:45:19.652 real 0m3.733s 00:45:19.652 user 0m3.257s 00:45:19.652 sys 0m0.346s 00:45:19.652 17:42:20 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:19.652 17:42:20 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:45:19.652 ************************************ 00:45:19.652 END TEST bdev_write_zeroes 00:45:19.652 ************************************ 00:45:19.652 17:42:20 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:45:19.652 17:42:20 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:45:19.652 17:42:20 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:19.652 17:42:20 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:45:19.652 ************************************ 00:45:19.652 START TEST bdev_json_nonenclosed 00:45:19.652 ************************************ 00:45:19.653 17:42:20 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:45:19.912 [2024-11-26 
17:42:20.377011] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:45:19.912 [2024-11-26 17:42:20.377222] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91206 ] 00:45:19.912 [2024-11-26 17:42:20.561413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:20.171 [2024-11-26 17:42:20.701062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:20.171 [2024-11-26 17:42:20.701178] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:45:20.171 [2024-11-26 17:42:20.701209] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:45:20.171 [2024-11-26 17:42:20.701220] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:45:20.431 ************************************ 00:45:20.431 END TEST bdev_json_nonenclosed 00:45:20.431 ************************************ 00:45:20.431 00:45:20.431 real 0m0.701s 00:45:20.431 user 0m0.455s 00:45:20.431 sys 0m0.141s 00:45:20.431 17:42:20 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:20.431 17:42:20 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:45:20.431 17:42:21 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:45:20.431 17:42:21 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:45:20.431 17:42:21 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:20.431 17:42:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:45:20.431 
************************************ 00:45:20.431 START TEST bdev_json_nonarray 00:45:20.431 ************************************ 00:45:20.431 17:42:21 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:45:20.689 [2024-11-26 17:42:21.165022] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:45:20.689 [2024-11-26 17:42:21.165235] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91237 ] 00:45:20.689 [2024-11-26 17:42:21.343375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:20.948 [2024-11-26 17:42:21.488216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:20.948 [2024-11-26 17:42:21.488343] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:45:20.948 [2024-11-26 17:42:21.488364] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:45:20.948 [2024-11-26 17:42:21.488385] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:45:21.216 00:45:21.216 real 0m0.709s 00:45:21.216 user 0m0.465s 00:45:21.216 sys 0m0.139s 00:45:21.216 ************************************ 00:45:21.216 END TEST bdev_json_nonarray 00:45:21.216 ************************************ 00:45:21.216 17:42:21 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:21.216 17:42:21 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:45:21.216 17:42:21 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]] 00:45:21.216 17:42:21 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]] 00:45:21.216 17:42:21 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]] 00:45:21.216 17:42:21 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:45:21.216 17:42:21 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup 00:45:21.216 17:42:21 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:45:21.216 17:42:21 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:45:21.216 17:42:21 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:45:21.216 17:42:21 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:45:21.216 17:42:21 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:45:21.216 17:42:21 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:45:21.216 00:45:21.216 real 0m53.482s 00:45:21.216 user 1m11.912s 00:45:21.216 sys 0m6.117s 00:45:21.216 ************************************ 00:45:21.216 END TEST blockdev_raid5f 00:45:21.216 ************************************ 00:45:21.216 17:42:21 blockdev_raid5f -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:45:21.216 17:42:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:45:21.216 17:42:21 -- spdk/autotest.sh@194 -- # uname -s 00:45:21.216 17:42:21 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:45:21.216 17:42:21 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:45:21.216 17:42:21 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:45:21.216 17:42:21 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:45:21.216 17:42:21 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:45:21.216 17:42:21 -- spdk/autotest.sh@260 -- # timing_exit lib 00:45:21.216 17:42:21 -- common/autotest_common.sh@732 -- # xtrace_disable 00:45:21.216 17:42:21 -- common/autotest_common.sh@10 -- # set +x 00:45:21.476 17:42:21 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:45:21.476 17:42:21 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:45:21.476 17:42:21 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:45:21.476 17:42:21 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:45:21.476 17:42:21 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:45:21.476 17:42:21 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:45:21.476 17:42:21 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:45:21.476 17:42:21 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:45:21.476 17:42:21 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:45:21.476 17:42:21 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:45:21.476 17:42:21 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:45:21.476 17:42:21 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:45:21.476 17:42:21 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:45:21.476 17:42:21 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:45:21.476 17:42:21 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:45:21.476 17:42:21 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:45:21.476 17:42:21 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:45:21.476 17:42:21 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:45:21.476 17:42:21 -- spdk/autotest.sh@385 -- # trap - 
SIGINT SIGTERM EXIT 00:45:21.476 17:42:21 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:45:21.476 17:42:21 -- common/autotest_common.sh@726 -- # xtrace_disable 00:45:21.476 17:42:21 -- common/autotest_common.sh@10 -- # set +x 00:45:21.476 17:42:21 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:45:21.476 17:42:21 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:45:21.476 17:42:21 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:45:21.476 17:42:21 -- common/autotest_common.sh@10 -- # set +x 00:45:23.387 INFO: APP EXITING 00:45:23.387 INFO: killing all VMs 00:45:23.387 INFO: killing vhost app 00:45:23.387 INFO: EXIT DONE 00:45:23.966 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:45:23.966 Waiting for block devices as requested 00:45:23.966 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:45:23.966 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:45:24.907 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:45:24.907 Cleaning 00:45:24.907 Removing: /var/run/dpdk/spdk0/config 00:45:24.907 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:45:24.907 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:45:24.907 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:45:24.907 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:45:24.907 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:45:24.907 Removing: /var/run/dpdk/spdk0/hugepage_info 00:45:24.907 Removing: /dev/shm/spdk_tgt_trace.pid56972 00:45:24.907 Removing: /var/run/dpdk/spdk0 00:45:24.907 Removing: /var/run/dpdk/spdk_pid56726 00:45:24.907 Removing: /var/run/dpdk/spdk_pid56972 00:45:24.907 Removing: /var/run/dpdk/spdk_pid57201 00:45:24.907 Removing: /var/run/dpdk/spdk_pid57316 00:45:24.907 Removing: /var/run/dpdk/spdk_pid57372 00:45:24.907 Removing: /var/run/dpdk/spdk_pid57517 00:45:24.907 Removing: 
/var/run/dpdk/spdk_pid57540 00:45:24.907 Removing: /var/run/dpdk/spdk_pid57750 00:45:24.907 Removing: /var/run/dpdk/spdk_pid57874 00:45:24.907 Removing: /var/run/dpdk/spdk_pid57986 00:45:24.907 Removing: /var/run/dpdk/spdk_pid58119 00:45:24.907 Removing: /var/run/dpdk/spdk_pid58233 00:45:24.907 Removing: /var/run/dpdk/spdk_pid58278 00:45:24.907 Removing: /var/run/dpdk/spdk_pid58314 00:45:24.907 Removing: /var/run/dpdk/spdk_pid58385 00:45:24.907 Removing: /var/run/dpdk/spdk_pid58513 00:45:24.907 Removing: /var/run/dpdk/spdk_pid58990 00:45:24.907 Removing: /var/run/dpdk/spdk_pid59070 00:45:24.907 Removing: /var/run/dpdk/spdk_pid59152 00:45:25.168 Removing: /var/run/dpdk/spdk_pid59182 00:45:25.168 Removing: /var/run/dpdk/spdk_pid59341 00:45:25.168 Removing: /var/run/dpdk/spdk_pid59363 00:45:25.168 Removing: /var/run/dpdk/spdk_pid59528 00:45:25.168 Removing: /var/run/dpdk/spdk_pid59550 00:45:25.168 Removing: /var/run/dpdk/spdk_pid59625 00:45:25.168 Removing: /var/run/dpdk/spdk_pid59643 00:45:25.168 Removing: /var/run/dpdk/spdk_pid59718 00:45:25.168 Removing: /var/run/dpdk/spdk_pid59736 00:45:25.168 Removing: /var/run/dpdk/spdk_pid59942 00:45:25.168 Removing: /var/run/dpdk/spdk_pid59979 00:45:25.168 Removing: /var/run/dpdk/spdk_pid60068 00:45:25.168 Removing: /var/run/dpdk/spdk_pid61443 00:45:25.168 Removing: /var/run/dpdk/spdk_pid61655 00:45:25.168 Removing: /var/run/dpdk/spdk_pid61795 00:45:25.168 Removing: /var/run/dpdk/spdk_pid62445 00:45:25.168 Removing: /var/run/dpdk/spdk_pid62656 00:45:25.168 Removing: /var/run/dpdk/spdk_pid62802 00:45:25.168 Removing: /var/run/dpdk/spdk_pid63445 00:45:25.168 Removing: /var/run/dpdk/spdk_pid63774 00:45:25.168 Removing: /var/run/dpdk/spdk_pid63915 00:45:25.168 Removing: /var/run/dpdk/spdk_pid65306 00:45:25.168 Removing: /var/run/dpdk/spdk_pid65559 00:45:25.168 Removing: /var/run/dpdk/spdk_pid65705 00:45:25.168 Removing: /var/run/dpdk/spdk_pid67091 00:45:25.168 Removing: /var/run/dpdk/spdk_pid67344 00:45:25.168 Removing: 
/var/run/dpdk/spdk_pid67494 00:45:25.168 Removing: /var/run/dpdk/spdk_pid68892 00:45:25.168 Removing: /var/run/dpdk/spdk_pid69343 00:45:25.168 Removing: /var/run/dpdk/spdk_pid69489 00:45:25.168 Removing: /var/run/dpdk/spdk_pid70991 00:45:25.168 Removing: /var/run/dpdk/spdk_pid71259 00:45:25.168 Removing: /var/run/dpdk/spdk_pid71403 00:45:25.168 Removing: /var/run/dpdk/spdk_pid72901 00:45:25.168 Removing: /var/run/dpdk/spdk_pid73170 00:45:25.168 Removing: /var/run/dpdk/spdk_pid73317 00:45:25.168 Removing: /var/run/dpdk/spdk_pid74808 00:45:25.168 Removing: /var/run/dpdk/spdk_pid75301 00:45:25.168 Removing: /var/run/dpdk/spdk_pid75452 00:45:25.168 Removing: /var/run/dpdk/spdk_pid75590 00:45:25.168 Removing: /var/run/dpdk/spdk_pid76025 00:45:25.168 Removing: /var/run/dpdk/spdk_pid76763 00:45:25.168 Removing: /var/run/dpdk/spdk_pid77139 00:45:25.168 Removing: /var/run/dpdk/spdk_pid77833 00:45:25.168 Removing: /var/run/dpdk/spdk_pid78292 00:45:25.168 Removing: /var/run/dpdk/spdk_pid79051 00:45:25.168 Removing: /var/run/dpdk/spdk_pid79460 00:45:25.168 Removing: /var/run/dpdk/spdk_pid81436 00:45:25.168 Removing: /var/run/dpdk/spdk_pid81874 00:45:25.168 Removing: /var/run/dpdk/spdk_pid82314 00:45:25.168 Removing: /var/run/dpdk/spdk_pid84412 00:45:25.168 Removing: /var/run/dpdk/spdk_pid84903 00:45:25.168 Removing: /var/run/dpdk/spdk_pid85425 00:45:25.168 Removing: /var/run/dpdk/spdk_pid86501 00:45:25.168 Removing: /var/run/dpdk/spdk_pid86831 00:45:25.168 Removing: /var/run/dpdk/spdk_pid87791 00:45:25.168 Removing: /var/run/dpdk/spdk_pid88118 00:45:25.168 Removing: /var/run/dpdk/spdk_pid89070 00:45:25.428 Removing: /var/run/dpdk/spdk_pid89400 00:45:25.428 Removing: /var/run/dpdk/spdk_pid90082 00:45:25.428 Removing: /var/run/dpdk/spdk_pid90368 00:45:25.428 Removing: /var/run/dpdk/spdk_pid90445 00:45:25.428 Removing: /var/run/dpdk/spdk_pid90488 00:45:25.428 Removing: /var/run/dpdk/spdk_pid90756 00:45:25.428 Removing: /var/run/dpdk/spdk_pid90943 00:45:25.428 Removing: 
/var/run/dpdk/spdk_pid91042 00:45:25.428 Removing: /var/run/dpdk/spdk_pid91147 00:45:25.428 Removing: /var/run/dpdk/spdk_pid91206 00:45:25.428 Removing: /var/run/dpdk/spdk_pid91237 00:45:25.428 Clean 00:45:25.428 17:42:25 -- common/autotest_common.sh@1453 -- # return 0 00:45:25.428 17:42:25 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:45:25.428 17:42:25 -- common/autotest_common.sh@732 -- # xtrace_disable 00:45:25.428 17:42:25 -- common/autotest_common.sh@10 -- # set +x 00:45:25.428 17:42:26 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:45:25.428 17:42:26 -- common/autotest_common.sh@732 -- # xtrace_disable 00:45:25.428 17:42:26 -- common/autotest_common.sh@10 -- # set +x 00:45:25.428 17:42:26 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:45:25.428 17:42:26 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:45:25.428 17:42:26 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:45:25.428 17:42:26 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:45:25.428 17:42:26 -- spdk/autotest.sh@398 -- # hostname 00:45:25.428 17:42:26 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:45:25.688 geninfo: WARNING: invalid characters removed from testname! 
00:45:52.275 17:42:49 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:45:53.218 17:42:53 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:45:55.779 17:42:56 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:45:58.316 17:42:58 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:46:00.859 17:43:00 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:46:02.777 17:43:03 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:46:05.314 17:43:05 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:46:05.314 17:43:05 -- spdk/autorun.sh@1 -- $ timing_finish 00:46:05.314 17:43:05 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:46:05.314 17:43:05 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:46:05.314 17:43:05 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:46:05.314 17:43:05 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:46:05.314 + [[ -n 5434 ]] 00:46:05.314 + sudo kill 5434 00:46:05.324 [Pipeline] } 00:46:05.340 [Pipeline] // timeout 00:46:05.346 [Pipeline] } 00:46:05.361 [Pipeline] // stage 00:46:05.367 [Pipeline] } 00:46:05.381 [Pipeline] // catchError 00:46:05.391 [Pipeline] stage 00:46:05.394 [Pipeline] { (Stop VM) 00:46:05.407 [Pipeline] sh 00:46:05.689 + vagrant halt 00:46:08.230 ==> default: Halting domain... 00:46:14.810 [Pipeline] sh 00:46:15.102 + vagrant destroy -f 00:46:18.441 ==> default: Removing domain... 
00:46:18.453 [Pipeline] sh 00:46:18.731 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:46:18.739 [Pipeline] } 00:46:18.751 [Pipeline] // stage 00:46:18.756 [Pipeline] } 00:46:18.767 [Pipeline] // dir 00:46:18.772 [Pipeline] } 00:46:18.785 [Pipeline] // wrap 00:46:18.790 [Pipeline] } 00:46:18.801 [Pipeline] // catchError 00:46:18.809 [Pipeline] stage 00:46:18.811 [Pipeline] { (Epilogue) 00:46:18.822 [Pipeline] sh 00:46:19.104 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:46:25.704 [Pipeline] catchError 00:46:25.707 [Pipeline] { 00:46:25.720 [Pipeline] sh 00:46:26.006 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:46:26.006 Artifacts sizes are good 00:46:26.016 [Pipeline] } 00:46:26.031 [Pipeline] // catchError 00:46:26.044 [Pipeline] archiveArtifacts 00:46:26.051 Archiving artifacts 00:46:26.188 [Pipeline] cleanWs 00:46:26.203 [WS-CLEANUP] Deleting project workspace... 00:46:26.203 [WS-CLEANUP] Deferred wipeout is used... 00:46:26.209 [WS-CLEANUP] done 00:46:26.211 [Pipeline] } 00:46:26.227 [Pipeline] // stage 00:46:26.234 [Pipeline] } 00:46:26.251 [Pipeline] // node 00:46:26.258 [Pipeline] End of Pipeline 00:46:26.427 Finished: SUCCESS